04 - Clustering.ipynb
###Markdown
# Clustering

In contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way: the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.

For example, let's take a look at a dataset that contains measurements of different species of wheat seed.

> **Citation**: The seeds dataset used in this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science).

###Code
import pandas as pd

# Load the training dataset
data = pd.read_csv('data/seeds.csv')

# Display a random sample of 10 observations (just the features)
features = data[data.columns[0:6]]
features.sample(10)

###Output _____no_output_____

###Markdown
As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.

Now, of course, six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates.

###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Normalize the numeric features so they're on the same scale
scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]])

# Get two principal components
pca = PCA(n_components=2).fit(scaled_features)
features_2d = pca.transform(scaled_features)
features_2d[0:10]

###Output _____no_output_____
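###Markdown
As an optional sanity check (not part of the original lab), the fitted PCA object's explained variance ratio shows how much of the information in the six original features the two components retain; the exact figures depend on the data.

###Code
# Proportion of the total variance captured by each of the two components
print(pca.explained_variance_ratio_)
print('Total variance explained:', pca.explained_variance_ratio_.sum())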
###Markdown
Now that we have the data points translated to two dimensions, we can visualize them in a plot:

###Code
import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(features_2d[:,0],features_2d[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Data')
plt.show()

###Output _____no_output_____

###Markdown
Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?

One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model.

###Code
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
%matplotlib inline

# Create 10 models with 1 to 10 clusters
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters = i)
    # Fit the data points
    kmeans.fit(features.values)
    # Get the WCSS (inertia) value
    wcss.append(kmeans.inertia_)

# Plot the WCSS values onto a line graph
plt.plot(range(1, 11), wcss)
plt.title('WCSS by Clusters')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()

###Output _____no_output_____

###Markdown
The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points.

## K-Means Clustering

The algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps (a minimal sketch of these steps follows the list):

1. A set of K centroids is randomly chosen.
2. Clusters are formed by assigning the data points to their closest centroid.
3. The mean of each cluster is computed and the centroid is moved to the mean.
4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.
5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.
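###Markdown
To make those steps concrete, here is a rough NumPy-only sketch of the basic algorithm (without the repeated reinitializations that scikit-learn performs). The function name `kmeans_basic` is purely illustrative; the actual training below uses scikit-learn's `KMeans`.

###Code
import numpy as np

def kmeans_basic(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Choose K data points at random as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # 2. Assign each point to its closest centroid
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 3. Move each centroid to the mean of the points assigned to it
        new_centroids = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                                  else centroids[i] for i in range(k)])
        # 4/5. Stop when the centroids no longer move (convergence)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Example: assign the seed observations to 3 clusters
basic_labels, _ = kmeans_basic(features.values, 3)
basic_labels[0:10]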
###Markdown
Let's try using K-Means on our seeds data with a K value of 3.

###Code
from sklearn.cluster import KMeans

# Create a model based on 3 centroids
model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000)
# Fit to the data and predict the cluster assignments for each data point
km_clusters = model.fit_predict(features.values)
# View the cluster assignments
km_clusters

###Output _____no_output_____

###Markdown
Let's see those cluster assignments with the two-dimensional data points.

###Code
def plot_clusters(samples, clusters):
    col_dic = {0:'blue',1:'green',2:'orange'}
    mrk_dic = {0:'*',1:'x',2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(features_2d, km_clusters)

###Output _____no_output_____

###Markdown
Hopefully, the data has been separated into three distinct clusters.

So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.

Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.
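###Markdown
As a rough illustration of that workflow (an addition, not part of the original lab), the cluster assignments produced above can be treated as class labels for a classifier; logistic regression is just an arbitrary choice here.

###Code
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Treat the K-Means cluster assignments as if they were class labels
X_train, X_test, y_train, y_test = train_test_split(features.values, km_clusters,
                                                    test_size=0.3, random_state=0)

# Train a simple classifier on the cluster-derived labels
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print('Accuracy against the cluster-derived labels:', clf.score(X_test, y_test))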
###Markdown
In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm.

###Code
seed_species = data[data.columns[7]]
plot_clusters(features_2d, seed_species.values)

###Output _____no_output_____

###Markdown
There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster.

## Hierarchical Clustering

Hierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.

Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:

1. The linkage distances between each of the data points are computed.
2. Points are clustered pairwise with their nearest neighbor.
3. Linkage distances between the clusters are computed.
4. Clusters are combined pairwise into larger clusters.
5. Steps 3 and 4 are repeated until all data points are in a single cluster.

The linkage function can be computed in a number of ways:

- Ward linkage measures the increase in variance for the clusters being linked.
- Average linkage uses the mean pairwise distance between the members of the two clusters.
- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.

Several different distance metrics are used to compute linkage functions:

- Euclidean or l2 distance is the most widely used. This metric is the only choice for the Ward linkage method.
- Manhattan or l1 distance is robust to outliers and has other interesting properties.
- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents.

### Agglomerative Clustering

Let's see an example of clustering the seeds data using an agglomerative clustering algorithm.

###Code
from sklearn.cluster import AgglomerativeClustering

agg_model = AgglomerativeClustering(n_clusters=3)
agg_clusters = agg_model.fit_predict(features.values)
agg_clusters

###Output _____no_output_____

###Markdown
So what do the agglomerative cluster assignments look like?

###Code
import matplotlib.pyplot as plt
%matplotlib inline

def plot_clusters(samples, clusters):
    col_dic = {0:'blue',1:'green',2:'orange'}
    mrk_dic = {0:'*',1:'x',2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(features_2d, agg_clusters)

###Output _____no_output_____
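###Markdown
The flat cluster assignments above don't show the hierarchy itself. As an optional extra (not part of the original lab), SciPy can plot the agglomerative merge tree as a dendrogram; here it is computed with Ward linkage on the scaled features.

###Code
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt

# Build the full merge tree with Ward linkage on the scaled features
linkage_matrix = linkage(scaled_features, method='ward')

plt.figure(figsize=(10, 4))
# Only show the last 12 merges to keep the tree readable
dendrogram(linkage_matrix, truncate_mode='lastp', p=12)
plt.title('Seeds dendrogram (Ward linkage)')
plt.xlabel('Merged cluster (leaf counts in parentheses)')
plt.ylabel('Linkage distance')
plt.show()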
###Markdown
# Clustering

In contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way: the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.

For example, let's take a look at the Palmer penguins dataset, which contains measurements of penguins. We'll use observations of three different species of penguin.

> **Citation**: The penguins dataset used in this exercise is a subset of data collected and made available by [Dr. Kristen Gorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php) and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), a member of the [Long Term Ecological Research Network](https://lternet.edu/).
###Code
import pandas as pd

# Load the training dataset (dropping rows with nulls)
penguins = pd.read_csv('data/penguins.csv').dropna()

# Display a random sample of 10 observations (just the features)
penguin_features = penguins[penguins.columns[0:4]]
penguin_features.sample(10)

###Output _____no_output_____

###Markdown
As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of a penguin. So you could interpret these as coordinates that describe each instance's location in four-dimensional space.

Now, of course, four-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four-dimensional feature values into two-dimensional coordinates.

###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Normalize the numeric features so they're on the same scale
scaled_features = MinMaxScaler().fit_transform(penguin_features[penguins.columns[0:4]])

# Get two principal components
pca = PCA(n_components=2).fit(scaled_features)
penguins_2d = pca.transform(scaled_features)
penguins_2d[0:10]

###Output _____no_output_____

###Markdown
Now that we have the data points translated to two dimensions, we can visualize them in a plot:

###Code
import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(penguins_2d[:,0],penguins_2d[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Penguin Data')
plt.show()

###Output _____no_output_____

###Markdown
Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?

One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model.

###Code
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
%matplotlib inline

# Create 10 models with 1 to 10 clusters
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters = i)
    # Fit the penguin data points
    kmeans.fit(penguin_features.values)
    # Get the WCSS (inertia) value
    wcss.append(kmeans.inertia_)

# Plot the WCSS values onto a line graph
plt.plot(range(1, 11), wcss)
plt.title('WCSS by Clusters')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()

###Output _____no_output_____

###Markdown
The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points.
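###Markdown
WCSS isn't the only way to judge a candidate number of clusters. As an optional cross-check (not part of the original lab), the mean silhouette coefficient - which compares how close each point is to its own cluster versus the nearest other cluster - can be computed for each value of K; higher values indicate better-separated clusters.

###Code
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Mean silhouette coefficient for 2 to 10 clusters (higher is better)
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(penguin_features.values)
    print(k, silhouette_score(penguin_features.values, labels))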
This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The mean of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our penguin data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the penguin data and predict the cluster assignments for each data point km_clusters = model.fit_predict(penguin_features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, km_clusters) ###Output _____no_output_____ ###Markdown So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the penguin data, the different species of penguin are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm. ###Code penguin_species = penguins[penguins.columns[4]] plot_clusters(penguins_2d, penguin_species.values) ###Output _____no_output_____ ###Markdown There may be some differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the penguin observations so that birds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. 
However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the penguin data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(penguin_features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. 
What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] scaled_features ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. 
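For reference, the WCSS that scikit-learn reports as `inertia_` is simply the total squared distance from each point to the centroid of the cluster it was assigned to: $$WCSS = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$ where $\mu_k$ is the centroid of cluster $C_k$. Because adding more clusters generally only reduces this total, we look for the point where the improvement levels off (the "elbow") rather than for the minimum value.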
###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. 
You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm. ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This metric is the only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? 
###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. 
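Whenever you compress data like this, it's worth knowing how much of the original variance the two components actually retain - if it's very small, distances in the two-dimensional plot can be misleading. scikit-learn exposes this as the fitted model's `explained_variance_ratio_` attribute. Here's a small, self-contained sketch on synthetic data (purely illustrative - `demo` and `demo_pca` are made-up names; you can run the same check on the `pca` object fitted in the next cell): ###Code import numpy as np
from sklearn.decomposition import PCA

# Illustrative only: six correlated columns built from two underlying signals,
# so a 2-component PCA should capture essentially all of the variance.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
demo = np.hstack([base, base @ rng.normal(size=(2, 4))])

demo_pca = PCA(n_components=2).fit(demo)
print('Variance explained per component:', demo_pca.explained_variance_ratio_)
print('Total variance explained:', demo_pca.explained_variance_ratio_.sum())
###Output _____no_output_____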
###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. 
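Before running the scikit-learn implementation, here's a deliberately minimal from-scratch sketch of the loop described above - purely illustrative (random initialization rather than k-means++, no restarts, and no handling of empty clusters), with `simple_kmeans` being a made-up helper name: ###Code import numpy as np

def simple_kmeans(X, k, n_iters=100, seed=0):
    # Step 1: pick k random points as the initial centroids
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 2: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: move each centroid to the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels

simple_kmeans(features.values, 3)[0:10]
###Output _____no_output_____ ###Markdown The scikit-learn version below adds smarter (k-means++) initialization and multiple restarts, keeping the model with the best WCSS: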
###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Palmer Islands penguin dataset, which contains measurements of penguins.Let's start by examining a dataset that contains observations of multiple classes. We'll use a dataset that contains observations of three different species of penguin.> **Citation**: The penguins dataset used in the this exercise is a subset of data collected and made available by [Dr. KristenGorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php)and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), amember of the [Long Term Ecological ResearchNetwork](https://lternet.edu/). 
###Code import pandas as pd from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA import matplotlib.pyplot as plt from sklearn.cluster import KMeans, AgglomerativeClustering %matplotlib inline penguins = pd.read_csv('data/penguins.csv').dropna() penguin_features = penguins[penguins.columns[0:4]] penguin_features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of an penguin. So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four dimensional feature values into two-dimensional coordinates. ###Code penguin_features = MinMaxScaler().fit_transform(penguin_features) pca = PCA(n_components=2) pca.fit(penguin_features) penguins_2d = pca.transform(penguin_features) penguins_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code plt.scatter(penguins_2d[:,0],penguins_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Penguin Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code wcss = [] for i in range(1, 11): kmeans = KMeans(i) kmeans.fit(penguin_features) wcss.append(kmeans.inertia_) plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into _K_ clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. 
Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our penguin data with a K value of 3. ###Code model = KMeans(3, n_init=20) km_clusters = model.fit_predict(penguin_features) km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): colors = ['blue', 'green', 'orange'] markers = ['*', 'x', '+'] for s in range(len(samples)): plt.scatter( samples[s][0], samples[s][1], s=100, color=colors[clusters[s]], marker=markers[clusters[s]], ) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated.So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the penguin data, the different species of penguin are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm ###Code penguin_species = penguins[penguins.columns[4]] plot_clusters(penguins_2d, penguin_species.values) ###Output _____no_output_____ ###Markdown There may be some differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the penguin observations so that birds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the penguin data using an agglomerative clustering algorithm. ###Code agg_model = AgglomerativeClustering(3) agg_clusters = agg_model.fit_predict(penguin_features) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code plot_clusters(penguins_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('https://raw.githubusercontent.com/claudiur-deloitte/ml-basics/master/data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. 
So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. 
Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. 
Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This is the only metric for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Iris dataset, one of the most famous samples in data science. This contains measurements of iris flowers. ###Code from sklearn import datasets iris = datasets.load_iris() print(iris.feature_names) iris.data[0:10] ###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of an iris. 
So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four dimensional features into two-dimensional coordinates. ###Code from sklearn.decomposition import PCA pca = PCA(n_components=2).fit(iris.data) iris_2d = pca.transform(iris.data) iris_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(iris_2d[:,0],iris_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Iris Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the Iris data points kmeans.fit(iris.data) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. 
When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our iris data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=100) # Fit to the iris data and predict the cluster assignments for each data point km_clusters = model.fit_predict(iris.data) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional iris data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the iris data, the different species of iris are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm ###Code plot_clusters(iris_2d, iris.target) ###Output _____no_output_____ ###Markdown There may be some slight differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the iris observations so that flowers of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the iris data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(iris.data) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Palmer Islands penguin dataset, which contains measurements of penguins.Let's start by examining a dataset that contains observations of multiple classes. We'll use a dataset that contains observations of three different species of penguin.> **Citation**: The penguins dataset used in the this exercise is a subset of data collected and made available by [Dr. KristenGorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php)and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), amember of the [Long Term Ecological ResearchNetwork](https://lternet.edu/). 
###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Palmer penguins dataset, which contains measurements of three different species of penguin.> **Citation**: The penguins dataset used in this exercise is a subset of data collected and made available by [Dr. Kristen Gorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php) and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), a member of the [Long Term Ecological Research Network](https://lternet.edu/). ###Code import pandas as pd # load the training dataset (dropping rows with nulls) penguins = pd.read_csv('data/penguins.csv').dropna() # Display a random sample of 10 observations (just the features) penguin_features = penguins[penguins.columns[0:4]] penguin_features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of a penguin. So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale penguin_features[penguins.columns[0:4]] = MinMaxScaler().fit_transform(penguin_features[penguins.columns[0:4]]) # Get two principal components pca = PCA(n_components=2).fit(penguin_features.values) penguins_2d = pca.transform(penguin_features.values) penguins_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(penguins_2d[:,0],penguins_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Penguin Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the penguin data points kmeans.fit(penguin_features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points.
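The elbow is a visual judgement call, so it can be worth corroborating it with a metric such as the silhouette score. This isn't part of the original exercise; it's a quick optional check that assumes the `penguin_features` DataFrame from the cells above. ###Code
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Higher silhouette scores indicate denser, better-separated clusters
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(penguin_features.values)
    print('k={}: silhouette={:.3f}'.format(k, silhouette_score(penguin_features.values, labels)))
###Output _____no_output_____ ###Markdown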
K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The mean of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our penguin data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=20, max_iter=200) # Fit to the penguin data and predict the cluster assignments for each data point km_clusters = model.fit_predict(penguin_features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional penguin data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated.So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the penguin data, the different species of penguin are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm. ###Code penguin_species = penguins[penguins.columns[4]] plot_clusters(penguins_2d, penguin_species.values) ###Output _____no_output_____ ###Markdown There may be some differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the penguin observations so that birds of the same species are generally in the same cluster.
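Because the numbers K-Means assigns to clusters don't have to line up with the way the species are encoded, a label-invariant measure such as the adjusted Rand index is a convenient way to quantify the agreement. This is an optional extra, assuming `penguin_species` and `km_clusters` from the cells above. ###Code
from sklearn.metrics import adjusted_rand_score

# 1.0 means the two groupings are identical; values near 0 mean agreement is no better than chance
print('Adjusted Rand index:', adjusted_rand_score(penguin_species.values, km_clusters))
###Output _____no_output_____ ###Markdown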
Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This metric is the only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the penguin data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(penguin_features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, agg_clusters) ###Output _____no_output_____
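###Markdown The AgglomerativeClustering estimator defaults to Ward linkage. As an optional experiment (not part of the original lab), you could refit with a different linkage strategy and see how much the assignments change; the adjusted Rand index below compares the two sets of labels while ignoring how the cluster IDs are numbered. It assumes `penguin_features` and `agg_clusters` from the cells above. ###Code
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Refit the penguin data with complete (maximal) linkage instead of the default Ward linkage
complete_model = AgglomerativeClustering(n_clusters=3, linkage='complete')
complete_clusters = complete_model.fit_predict(penguin_features.values)
print('Agreement between Ward and complete linkage:',
      adjusted_rand_score(agg_clusters, complete_clusters))
###Output _____no_output_____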
###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model.
###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The mean of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the data has been separated into three distinct clusters.
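It can also be interesting to look at where the learned centroids ended up in the original six-dimensional feature space. This is an optional extra step, assuming the fitted `model` and the `features` DataFrame from the cells above. ###Code
import pandas as pd

# One row per cluster, one column per feature - the centroids K-Means converged on
pd.DataFrame(model.cluster_centers_, columns=features.columns)
###Output _____no_output_____ ###Markdown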
So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm. ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This metric is the only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like?
###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. 
###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. 
###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Iris dataset, one of the most famous samples in data science. This contains measurements of iris flowers. ###Code from sklearn import datasets iris = datasets.load_iris() print(iris.feature_names) iris.data[0:10] ###Output ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of an iris. 
So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four dimensional features into two-dimensional coordinates. ###Code from sklearn.decomposition import PCA pca = PCA(n_components=2).fit(iris.data) iris_2d = pca.transform(iris.data) iris_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(iris_2d[:,0],iris_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Iris Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the Iris data points kmeans.fit(iris.data) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. 
When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our iris data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=100) # Fit to the iris data and predict the cluster assignments for each data point km_clusters = model.fit_predict(iris.data) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional iris data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the iris data, the different species of iris are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm ###Code plot_clusters(iris_2d, iris.target) ###Output _____no_output_____ ###Markdown There may be some slight differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the iris observations so that flowers of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the iris data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(iris.data) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). 
###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. 
A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. 
Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). 
UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. 
This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. 
###Markdown Hierarchical Clustering
Hierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so. Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:
1. The linkage distances between each of the data points are computed.
2. Points are clustered pairwise with their nearest neighbor.
3. Linkage distances between the clusters are computed.
4. Clusters are combined pairwise into larger clusters.
5. Steps 3 and 4 are repeated until all data points are in a single cluster.
The linkage function can be computed in a number of ways:
- Ward linkage measures the increase in variance for the clusters being linked,
- Average linkage uses the mean pairwise distance between the members of the two clusters,
- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.
Several different distance metrics are used to compute linkage functions:
- Euclidean or l2 distance is the most widely used. This is the only metric for the Ward linkage method.
- Manhattan or l1 distance is robust to outliers and has other interesting properties.
- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents.
Agglomerative Clustering
Let's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code
from sklearn.cluster import AgglomerativeClustering

agg_model = AgglomerativeClustering(n_clusters=3)
agg_clusters = agg_model.fit_predict(features.values)
agg_clusters
###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code
import matplotlib.pyplot as plt
%matplotlib inline

def plot_clusters(samples, clusters):
    col_dic = {0:'blue', 1:'green', 2:'orange'}
    mrk_dic = {0:'*', 1:'x', 2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1], color=colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(features_2d, agg_clusters)
###Output _____no_output_____
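###Markdown One advantage of agglomerative methods is that the whole merge history can be drawn as a *dendrogram*, which often makes the natural number of clusters easier to see than a single flat assignment. The cell below is a sketch added for illustration (it is not part of the original lab) and uses SciPy's hierarchy utilities on the scaled seed features computed earlier; it assumes `scaled_features` is still in scope. ###Code
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Build the full merge tree with Ward linkage and draw the dendrogram;
# the height of each join reflects how dissimilar the merged clusters are
merge_tree = linkage(scaled_features, method='ward')
plt.figure(figsize=(10, 4))
dendrogram(merge_tree, no_labels=True)
plt.title('Ward linkage dendrogram (seeds)')
plt.ylabel('Linkage distance')
plt.show()
###Output _____no_output_____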
###Markdown Now let's try the same techniques on a second dataset: the Palmer Islands penguin dataset, which contains observations of three different species of penguin.
> **Citation**: The penguins dataset used in this exercise is a subset of data collected and made available by [Dr. Kristen Gorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php) and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), a member of the [Long Term Ecological Research Network](https://lternet.edu/). ###Code
import pandas as pd

# load the training dataset (dropping rows with nulls)
penguins = pd.read_csv('data/penguins.csv').dropna()

# Display a random sample of 10 observations (just the features)
penguin_features = penguins[penguins.columns[0:4]]
penguin_features.sample(10)
###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of a penguin. So you could interpret these as coordinates that describe each instance's location in four-dimensional space. As before, we'll take advantage of *Principal Component Analysis* (PCA) to summarize each observation as coordinates for two principal components - in other words, we'll translate the four-dimensional feature values into two-dimensional coordinates. ###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Normalize the numeric features so they're on the same scale
scaled_features = MinMaxScaler().fit_transform(penguin_features[penguins.columns[0:4]])

# Get two principal components
pca = PCA(n_components=2).fit(scaled_features)
penguins_2d = pca.transform(scaled_features)
penguins_2d[0:10]
###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code
import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(penguins_2d[:,0], penguins_2d[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Penguin Data')
plt.show()
###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but again, without known class labels, how do you know how many clusters to separate your data into? One way we can try to find out is to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster using the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer.
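To make the WCSS metric concrete, the cell below (a sketch added for illustration, not part of the original lab) computes it by hand for a single three-cluster fit and checks that it matches the `inertia_` value scikit-learn reports; it assumes the `penguin_features` DataFrame loaded above. ###Code
from sklearn.cluster import KMeans

X = penguin_features.values
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# WCSS = sum of squared distances from each point to the centroid of its assigned cluster
wcss_by_hand = sum(((X[km.labels_ == c] - centre) ** 2).sum()
                   for c, centre in enumerate(km.cluster_centers_))
print('WCSS computed by hand:', wcss_by_hand)
print('KMeans inertia_      :', km.inertia_)  # should agree up to floating-point error
###Output _____no_output_____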
###Markdown You can then plot the WCSS for each model. ###Code
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
%matplotlib inline

# Create 10 models with 1 to 10 clusters
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    # Fit the penguin data points
    kmeans.fit(penguin_features.values)
    # Get the WCSS (inertia) value
    wcss.append(kmeans.inertia_)

#Plot the WCSS values onto a line graph
plt.plot(range(1, 11), wcss)
plt.title('WCSS by Clusters')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
###Output _____no_output_____ ###Markdown As with the seeds data, the plot shows a large reduction in WCSS from one to two clusters and a further noticeable reduction from two to three, with an "elbow" at around three clusters - a good indication that there are two to three reasonably well separated clusters of data points. Let's try using K-Means on our penguin data with a K value of 3. ###Code
from sklearn.cluster import KMeans

# Create a model based on 3 centroids
model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000)
# Fit to the penguin data and predict the cluster assignments for each data point
km_clusters = model.fit_predict(penguin_features.values)
# View the cluster assignments
km_clusters
###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code
def plot_clusters(samples, clusters):
    col_dic = {0:'blue', 1:'green', 2:'orange'}
    mrk_dic = {0:'*', 1:'x', 2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1], color=colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(penguins_2d, km_clusters)
###Output _____no_output_____
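###Markdown The scatter plot shows the shape of the clusters in the two PCA dimensions, but it can also help to inspect the cluster centres in the original feature space to get a feel for what each cluster represents. The cell below is a small sketch added for illustration (not part of the original lab); it assumes the fitted `model`, the `km_clusters` assignments and the `penguin_features` DataFrame from the cells above. ###Code
import pandas as pd

# Each centroid is expressed in the original (unscaled) feature units,
# so each row describes a "typical" penguin for that cluster
centroids = pd.DataFrame(model.cluster_centers_, columns=penguin_features.columns)
centroids.index.name = 'cluster'
print(centroids.round(2))

# Number of penguins assigned to each cluster
print(pd.Series(km_clusters).value_counts().sort_index())
###Output _____no_output_____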
###Markdown In the case of the penguin data, the different species of penguin are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm. ###Code
penguin_species = penguins[penguins.columns[4]]
plot_clusters(penguins_2d, penguin_species.values)
###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the penguin observations so that birds of the same species are generally in the same cluster. Now let's repeat the agglomerative (hierarchical) clustering described earlier on the penguin data. ###Code
from sklearn.cluster import AgglomerativeClustering

agg_model = AgglomerativeClustering(n_clusters=3)
agg_clusters = agg_model.fit_predict(penguin_features.values)
agg_clusters
###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like?
###Code
import matplotlib.pyplot as plt
%matplotlib inline

def plot_clusters(samples, clusters):
    col_dic = {0:'blue', 1:'green', 2:'orange'}
    mrk_dic = {0:'*', 1:'x', 2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1], color=colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(penguins_2d, agg_clusters)
###Output _____no_output_____
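###Markdown Finally, it is worth noting that the linkage choice can change the agglomerative assignments. The cell below is a sketch added for illustration (not part of the original lab): it refits the penguin data with three different linkage settings and uses the adjusted Rand index to measure how closely each result agrees with the K-Means clusters found earlier. It assumes `penguin_features` and `km_clusters` are still in scope. ###Code
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Compare agglomerative clusterings built with different linkage strategies
for linkage_method in ['ward', 'complete', 'average']:
    agg = AgglomerativeClustering(n_clusters=3, linkage=linkage_method)
    labels = agg.fit_predict(penguin_features.values)
    agreement = adjusted_rand_score(km_clusters, labels)
    print(f'{linkage_method:>8} linkage vs K-Means: ARI = {agreement:.3f}')
###Output _____no_output_____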
###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into? One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids is randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The mean of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected. Let's try using K-Means on our seeds data with a K value of 3.
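To make those steps concrete, here is a minimal, illustrative NumPy sketch of the basic loop - a single random initialization, no handling of empty clusters, and purely for demonstration rather than the scikit-learn implementation we'll actually use below.

###Code
import numpy as np

def kmeans_sketch(X, k=3, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Choose K centroids at random from the data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. Assign each point to its closest centroid
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 3. Move each centroid to the mean of the points assigned to it
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. Stop when the centroids no longer move (the algorithm has converged)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Try the sketch on some random two-dimensional data
X_demo = np.random.default_rng(1).normal(size=(300, 2))
demo_labels, demo_centroids = kmeans_sketch(X_demo, k=3)
print(demo_centroids)
###Output _____no_output_____
###Markdown
In practice we'll let scikit-learn handle the iteration and the multiple reinitializations for us: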
###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the data has been separated into three distinct clusters. So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors. Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model. In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm. ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so. Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5.
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). 
###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. 
A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. 
Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster. The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters. Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This is the only metric for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way: the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters. For example, let's take a look at the Iris dataset, one of the most famous samples in data science. This contains measurements of iris flowers. ###Code from sklearn import datasets iris = datasets.load_iris() print(iris.feature_names) iris.data[0:10] ###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of an iris.
So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four dimensional features into two-dimensional coordinates. ###Code from sklearn.decomposition import PCA pca = PCA(n_components=2).fit(iris.data) iris_2d = pca.transform(iris.data) iris_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(iris_2d[:,0],iris_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Iris Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the Iris data points kmeans.fit(iris.data) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. 
When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected. Let's try using K-Means on our iris data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=100) # Fit to the iris data and predict the cluster assignments for each data point km_clusters = model.fit_predict(iris.data) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional iris data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated. So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors. Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model. In the case of the iris data, the different species of iris are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm. ###Code plot_clusters(iris_2d, iris.target) ###Output _____no_output_____ ###Markdown There may be some slight differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the iris observations so that flowers of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so. Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:1. The linkage distances between each of the data points are computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5.
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the iris data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(iris.data) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(iris_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). 
###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. 
A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. 
Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). 
UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. 
This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. 
However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. 
What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. 
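For reference, one common way to write this metric (it's the quantity that scikit-learn's `KMeans` exposes as `inertia_`) is:

$$WCSS = \sum_{k=1}^{K} \sum_{x_i \in C_k} \left\| x_i - \mu_k \right\|^2$$

where $C_k$ is the set of points assigned to cluster $k$ and $\mu_k$ is that cluster's centroid.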
###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. 
You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? 
###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. 
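(A brief aside: a natural question is how much of the original six-dimensional information two components actually retain. Once the PCA in the next cell has been fitted, you could check this with the estimator's `explained_variance_ratio_` attribute, for example:)

```python
# Assumes `pca` is the fitted PCA(n_components=2) object from the cell below
print(pca.explained_variance_ratio_)        # fraction of variance captured by each component
print(pca.explained_variance_ratio_.sum())  # total variance retained in the 2-D view
```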
###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() # features ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. 
###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at the Palmer Islands penguin dataset, which contains measurements of penguins.Let's start by examining a dataset that contains observations of multiple classes. We'll use a dataset that contains observations of three different species of penguin.> **Citation**: The penguins dataset used in the this exercise is a subset of data collected and made available by [Dr. KristenGorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php)and the [Palmer Station, Antarctica LTER](https://pal.lternet.edu/), amember of the [Long Term Ecological ResearchNetwork](https://lternet.edu/). 
###Code import pandas as pd # load the training dataset (dropping rows with nulls) penguins = pd.read_csv('data/penguins.csv').dropna() # Display a random sample of 10 observations (just the features) penguin_features = penguins[penguins.columns[0:4]] penguin_features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains four data points (or *features*) for each instance (*observation*) of an penguin. So you could interpret these as coordinates that describe each instance's location in four-dimensional space.Now, of course four dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the four dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale penguin_features[penguins.columns[0:4]] = MinMaxScaler().fit_transform(penguin_features[penguins.columns[0:4]]) # Get two principal components pca = PCA(n_components=2).fit(penguin_features.values) penguins_2d = pca.transform(penguin_features.values) penguins_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(penguins_2d[:,0],penguins_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Penguin Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the Iris data points kmeans.fit(penguin_features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. 
This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our penguin data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=20, max_iter=200) # Fit to the iris data and predict the cluster assignments for each data point km_clusters = model.fit_predict(penguin_features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, km_clusters) ###Output _____no_output_____ ###Markdown The clusters look reasonably well separated.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the penguin data, the different species of penguin are already known, so we can use the class labels identifying the species to plot the class assignments and compare them to the clusters identified by our unsupervised algorithm ###Code penguin_species = penguins[penguins.columns[4]] plot_clusters(penguins_2d, penguin_species.values) ###Output _____no_output_____ ###Markdown There may be some differences in the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the penguin observations so that birds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. 
However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the penguin data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(penguin_features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(penguins_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be though of as vector coordinates that define the entity's position in n-dimensional space. 
What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. ###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components=2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. 
###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. 
You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidian or l2 distance is the most widely used. This metric is only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity, is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? 
###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____ ###Markdown ClusteringIn contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is simllar conceptually to *classification*, except that the the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way; the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.For example, let's take a look at a dataset that contains measurements of different species of wheat seed.> **Citation**: The seeds dataset used in the this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science). ###Code import pandas as pd # load the training dataset data = pd.read_csv('data/seeds.csv') # Display a random sample of 10 observations (just the features) features = data[data.columns[0:6]] features.sample(10) ###Output _____no_output_____ ###Markdown As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.Now, of course six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates. 
###Code from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA # Normalize the numeric features so they're on the same scale scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]]) # Get two principal components pca = PCA(n_components = 2).fit(scaled_features) features_2d = pca.transform(scaled_features) features_2d[0:10] #js: complete decomposition pca = PCA().fit(scaled_features) #js print(pca.explained_variance_) print(pca.explained_variance_ratio_) #js: cumulative explain variance ratio pca.explained_variance_ratio_.cumsum() #js: how many components do I need if I want alleast 95% of the variance to be explained pca = PCA(.95) pca.fit(scaled_features) pca.n_components_ ###Output _____no_output_____ ###Markdown Now that we have the data points translated to two dimensions, we can visualize them in a plot: ###Code import matplotlib.pyplot as plt %matplotlib inline plt.scatter(features_2d[:,0],features_2d[:,1]) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Data') plt.show() ###Output _____no_output_____ ###Markdown Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model. ###Code #importing the libraries import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans %matplotlib inline # Create 10 models with 1 to 10 clusters wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters = i) # Fit the data points kmeans.fit(features.values) # kmeans.fit(scaled_features) # why not on these? # Get the WCSS (inertia) value wcss.append(kmeans.inertia_) #Plot the WCSS values onto a line graph plt.plot(range(1, 11), wcss) plt.title('WCSS by Clusters') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() ###Output _____no_output_____ ###Markdown The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points. K-Means ClusteringThe algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:1. A set of K centroids are randomly chosen.2. Clusters are formed by assigning the data points to their closest centroid.3. The means of each cluster is computed and the centroid is moved to the mean.4. Steps 2 and 3 are repeated until a stopping criteria is met. Typically, the algorithm terminates when each new iteration results in negligable movement of centroids and the clusters become static.5. 
When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters - note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.Let's try using K-Means on our seeds data with a K value of 3. ###Code from sklearn.cluster import KMeans # Create a model based on 3 centroids model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000) # Fit to the data and predict the cluster assignments for each data point km_clusters = model.fit_predict(features.values) # View the cluster assignments km_clusters ###Output _____no_output_____ ###Markdown Let's see those cluster assignments with the two-dimensional data points. ###Code def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, km_clusters) ###Output _____no_output_____ ###Markdown Hopefully, the the data has been separated into three distinct clusters.So what's the practical use of clustering? In some cases, you may have data that you need to group into distict clusters without knowing how many clusters there are or what they indicate. For example a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm ###Code seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values) ###Output _____no_output_____ ###Markdown There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster. Hierarchical ClusteringHierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.Hierarchical clustering creates clusters by either a *divisive* method or *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up** approach. In this lab you will work with agglomerative clustering which roughly works as follows:1. The linkage distances between each of the data points is computed.2. Points are clustered pairwise with their nearest neighbor.3. Linkage distances between the clusters are computed.4. Clusters are combined pairwise into larger clusters.5. 
Steps 3 and 4 are repeated until all data points are in a single cluster.The linkage function can be computed in a number of ways:- Ward linkage measures the increase in variance for the clusters being linked,- Average linkage uses the mean pairwise distance between the members of the two clusters,- Complete or Maximal linkage uses the maximum distance between the members of the two clusters.Several different distance metrics are used to compute linkage functions:- Euclidean or l2 distance is the most widely used. This metric is the only choice for the Ward linkage method.- Manhattan or l1 distance is robust to outliers and has other interesting properties.- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative ClusteringLet's see an example of clustering the seeds data using an agglomerative clustering algorithm. ###Code from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters ###Output _____no_output_____ ###Markdown So what do the agglomerative cluster assignments look like? ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_clusters(samples, clusters): col_dic = {0:'blue',1:'green',2:'orange'} mrk_dic = {0:'*',1:'x',2:'+'} colors = [col_dic[x] for x in clusters] markers = [mrk_dic[x] for x in clusters] for sample in range(len(clusters)): plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100) plt.xlabel('Dimension 1') plt.ylabel('Dimension 2') plt.title('Assignments') plt.show() plot_clusters(features_2d, agg_clusters) ###Output _____no_output_____
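###Markdown The pairwise merges that agglomerative clustering performs can also be visualised as a *dendrogram*, which shows the full merge hierarchy rather than a single flat cut into three clusters. Here is a brief sketch using SciPy's hierarchical clustering utilities (this assumes SciPy is installed alongside scikit-learn, and uses Ward linkage on the same seed features):

```python
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt

# Compute the Ward linkage matrix on the seed features (assumes `features` from the cells above)
linkage_matrix = linkage(features.values, method='ward')

plt.figure(figsize=(10, 5))
dendrogram(linkage_matrix, no_labels=True)
plt.title('Agglomerative merge hierarchy (Ward linkage)')
plt.ylabel('Linkage distance')
plt.show()
```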
pytorch_tutorials/load_data.ipynb
###Markdown **Example of reading in data using PyTorch, and getting used to handling tensors, data loaders, etc. And finally visualising the data** This is taken from the PyTorch tutorials: https://pytorch.org/tutorials/recipes/recipes/loading_data_recipe.html We will load in some data in PyTorch, and get used to the datasets it has to offer ###Code pip install torchaudio import torch import torchaudio torchaudio.datasets.YESNO( root='/usr/local/', # need to enter the path to where to download the dataset url = 'http://www.openslr.org/resources/1/waves_yesno.tar.gz', folder_in_archive='waves_yesno', download=True, transform=None, target_transform=None ) # Let's look at a datapoint in the YESNO dataset, which consists of # 60 recordings of an individual saying yes or no in Hebrew. # Each recording is eight (8) words long. yesno_data_trainset = torchaudio.datasets.YESNO(root='./', download=True) # pick a datapoint (index 1, i.e. the second recording) to see an example of the data n = 1 waveform, sample_rate, labels = yesno_data_trainset[n] print(f"Waveform: \n {waveform}") # we want to look at the size of the waveform/tensor above; # in PyTorch, we use the .size() or .shape method to do this print(f"Waveform dimensions: {waveform.size()}") print(f"Waveform dimensions: {waveform.shape}") print(sample_rate) print(labels) ###Output [0, 0, 0, 1, 0, 0, 0, 1] ###Markdown As is customary, we would split the data into a "training" and a "testing" dataset, so that the model can be evaluated on held-out data after training. For now, we simply wrap the full dataset in a DataLoader; an actual split is sketched at the end of this notebook. ###Code data_load = torch.utils.data.DataLoader(yesno_data_trainset, batch_size=1, shuffle=True) type(data_load) ###Output _____no_output_____ ###Markdown Iterate over the data; the DataLoader is an iterable, and each item it yields contains tensors for the waveform, sample rate, and labels. ###Code for data in data_load: print(f"Data: {data} \n") print(f"Waveform: {data[0]} \n" f"Sample rate: {data[1]} \n" f"Labels: {data[2]}") break ###Output Data: [tensor([[[-0.0008, -0.0009, -0.0010, ..., 0.0053, 0.0047, 0.0034]]]), tensor([8000]), [tensor([0]), tensor([0]), tensor([1]), tensor([1]), tensor([0]), tensor([1]), tensor([0]), tensor([0])]] Waveform: tensor([[[-0.0008, -0.0009, -0.0010, ..., 0.0053, 0.0047, 0.0034]]]) Sample rate: tensor([8000]) Labels: [tensor([0]), tensor([0]), tensor([1]), tensor([1]), tensor([0]), tensor([1]), tensor([0]), tensor([0])] ###Markdown Visualise the data ###Code import matplotlib.pyplot as plt print(data[0][0].numpy()) # converted from tensor to numpy ###Output [[-0.00082397 -0.00094604 -0.0010376 ... 0.00527954 0.00466919 0.00335693]] ###Markdown Let's plot what the waveform looks like ###Code plt.figure() plt.plot(waveform.t().numpy()) ###Output _____no_output_____
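###Markdown The train/test split mentioned earlier was not actually performed in the cells above. As a sketch of how it could be done (the 80/20 proportions and the seed are arbitrary choices for illustration), `torch.utils.data.random_split` can divide the dataset before building the loaders:

```python
import torch

# Assumes `yesno_data_trainset` from the cells above (60 recordings in total)
n_total = len(yesno_data_trainset)
n_train = int(0.8 * n_total)
n_test = n_total - n_train

train_set, test_set = torch.utils.data.random_split(
    yesno_data_trainset, [n_train, n_test],
    generator=torch.Generator().manual_seed(42))

train_loader = torch.utils.data.DataLoader(train_set, batch_size=1, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=1, shuffle=False)
print(len(train_set), len(test_set))
```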
intro-to-pytorch/Part 7 - Loading Image Data (Exercises).ipynb
###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/aioz-giang/Desktop/Learn/Pytorch/Learn-pytorch/data/Cat_Dog_data/Cat_Dog_data/train' transform = transforms.Compose( [transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()] )# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
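(One aside before the exercise cells below: when you do add `transforms.Normalize` later, the per-channel mean and standard deviation can either be the generic `[0.5, 0.5, 0.5]` values shown above or estimated from the training images themselves. A rough sketch of the latter, assuming a `trainloader` that yields batches of un-normalized 3-channel image tensors:)

```python
import torch

# Running per-channel sums over the training set
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0

for images, _ in trainloader:                      # images: [batch, 3, H, W]
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    n_pixels += images.shape[0] * images.shape[2] * images.shape[3]

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()   # Var[x] = E[x^2] - E[x]^2
print(mean, std)
```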
###Code data_dir = '/home/aioz-giang/Desktop/Learn/Pytorch/Learn-pytorch/data/Cat_Dog_data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ] ) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
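To get a feel for why the fully-connected approach struggles here, the back-of-the-envelope sketch below counts the weights in a single hidden layer (the 224x224 input size and the 512 hidden units are just illustrative choices, not a recommended architecture): flattening a 3-channel 224x224 image gives a 150,528-dimensional input, so even one modest layer already costs roughly 77 million parameters.

```python
import torch
from torch import nn

flattened = 3 * 224 * 224          # 150,528 values per image once it's flattened
fc1 = nn.Linear(flattened, 512)    # a single, fairly small hidden layer

n_params = sum(p.numel() for p in fc1.parameters())
print(n_params)                    # 77,070,848 weights and biases in this layer alone

x = torch.randn(32, 3, 224, 224)   # a batch shaped like the ones from the dataloader
out = fc1(x.view(x.shape[0], -1))  # flatten each image, then apply the layer
print(out.shape)                   # torch.Size([32, 512])
```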
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) !curl -O https://raw.githubusercontent.com/x-cloud/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py %run ./helper.py # Run this to test your data loader images, labels = next(iter(dataloader)) # helper.imshow(images[0], normalize=False) imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
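If you want to see this mapping for yourself, `ImageFolder` exposes it directly. Here's a small sketch, assuming the root directory contains `cat` and `dog` subfolders as above (the path is a placeholder):

```python
from torchvision import datasets

# Assumes 'path/to/data' contains cat/ and dog/ subfolders, as sketched above
dataset = datasets.ImageFolder('path/to/data')

print(dataset.classes)       # ['cat', 'dog'] - class names taken from the directory names
print(dataset.class_to_idx)  # {'cat': 0, 'dog': 1} - the integer labels attached to each image
```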
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
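As a quick sanity check, a composed pipeline like the one above can also be applied to a single image on its own; here is a short sketch (the image path is a placeholder, and it assumes Pillow is installed alongside torchvision):

```python
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

img = Image.open('path/to/data/cat/123.png')   # placeholder path to any image file
tensor = transform(img)                        # the transforms run in the order listed

print(tensor.shape)                 # torch.Size([3, 224, 224]) for an RGB image
print(tensor.min(), tensor.max())   # ToTensor scales pixel values into [0.0, 1.0]
```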
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(244), transforms.RandomRotation(45), transforms.RandomVerticalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from tqdm import tqdm import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False); ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
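###Markdown Before defining the transforms, it can help to see the normalization formula above in action. Here's a minimal sketch on a made-up tensor (not the dataset) showing that `Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])` maps the [0, 1] range produced by `ToTensor()` onto [-1, 1], and how to undo it for display. ###Code
# Pretend "image" with values in [0, 1], like ToTensor() would produce
fake_img = torch.rand(3, 4, 4)

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
normed = normalize(fake_img.clone())   # clone: older torchvision versions normalize in place

print(normed.min().item(), normed.max().item())   # everything now lies in [-1, 1]

# Undo the normalization (useful before plotting): input * std + mean
restored = normed * 0.5 + 0.5
print(torch.allclose(restored, fake_img))          # True
 ###Output _____no_output_____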
###Code data_dir = 'Cat_Dog_data' normalize_transform = transforms.Normalize([.5], [.5]) # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(255), transforms.RandomHorizontalFlip(), transforms.Grayscale(), transforms.ToTensor(), normalize_transform]) test_transforms = transforms.Compose([transforms.Resize(288), transforms.CenterCrop(255), transforms.Grayscale(), transforms.ToTensor(), normalize_transform]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) batch_size = 64 num_workers = 8 trainloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers) testloader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): axes[ii].imshow(images[ii, 0]) axes[ii].axis('off') ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() n_input = 255 ** 2 n_hidden1 = 1024 n_hidden2 = 512 n_hidden3 = 128 n_output = 2 self.fc1 = nn.Linear(n_input, n_hidden1) self.fc2 = nn.Linear(n_hidden1, n_hidden2) self.fc3 = nn.Linear(n_hidden2, n_hidden3) self.output = nn.Linear(n_hidden3, n_output) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.output(x) return x, F.softmax(x, dim=1) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = Classifier().to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) n_epoch = 30 train_losses, test_losses = [], [] for epoch in range(n_epoch): cum_train_loss = 0. cum_test_loss = 0. cum_accuracy = 0. 
for images, labels in tqdm(trainloader): images, labels = images.to(device), labels.to(device) logits, _ = model(images) loss = criterion(logits, labels) optimizer.zero_grad() loss.backward() optimizer.step() cum_train_loss += loss.item() else: model.eval() with torch.no_grad(): for images, labels in tqdm(testloader): images, labels = images.to(device), labels.to(device) logits, ps = model(images) loss = criterion(logits, labels) cum_test_loss += loss.item() _, top_class = ps.topk(1, dim=1) cum_accuracy += (labels == top_class.view(*labels.shape)).float().mean() model.train() train_loss = cum_train_loss / len(trainloader) test_loss = cum_test_loss / len(testloader) accuracy = cum_accuracy / len(testloader) train_losses.append(train_loss) test_losses.append(test_loss) print("Epoch: {:3}/{:3}".format(epoch + 1, n_epoch), "Train Loss: {:.3f}".format(train_loss), "Test Loss: {:.3f}".format(test_loss), "Test Accuracy: {:3f}%".format(accuracy)) plt.plot(train_losses, label="Train losses") plt.plot(test_losses, label="Test losses") plt.legend() plt.show(); ###Output _____no_output_____ ###Markdown Table of Contents1&nbsp;&nbsp;Loading Image Data1.0.1&nbsp;&nbsp;Transforms1.0.2&nbsp;&nbsp;Data Loaders1.1&nbsp;&nbsp;Data Augmentation Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/Users/ctatwawadi/Downloads/Cat_Dog_data/train' #/Users/ctatwawadi/Downloads/Cat_Dog_data transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '/Users/ctatwawadi/Downloads/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'cats_dogs/train' transform = transforms.Compose([transforms.Resize(255), transforms.RandomRotation(15), transforms.CenterCrop(200), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'cats_dogs' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomRotation(20), transforms.CenterCrop(200), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(200), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # Compose transforms here transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # Create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
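###Markdown One way to see the randomness described above is to pull the same image twice through an augmenting pipeline: each access re-samples the rotation, crop and flip, so the two tensors come out different. A small sketch, assuming the same `Cat_Dog_data/train` folder as before. ###Code
# Indexing an ImageFolder re-applies the transform every time, so random
# augmentations give a different tensor for the same underlying file.
aug_transform = transforms.Compose([transforms.RandomRotation(30),
                                    transforms.RandomResizedCrop(224),
                                    transforms.RandomHorizontalFlip(),
                                    transforms.ToTensor()])
aug_data = datasets.ImageFolder('Cat_Dog_data/train', transform=aug_transform)

first, _ = aug_data[0]
second, _ = aug_data[0]
print(torch.equal(first, second))   # almost always False: two different random crops/rotations
 ###Output _____no_output_____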
###Code data_dir = 'Cat_Dog_data' # Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(45), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(p=0.5), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper import numpy as np from pathlib import Path ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. 
We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code path = Path('/storage/demyanchuk/') data_dir = path/'Cat_Dog_data/train' transform = transforms.Compose(transforms=[ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(root=data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) means = {0:[], 1:[], 2:[]} stds = {0:[], 1:[], 2:[]} for ims, _ in dataloader: for i in range(3): means[i].append(ims[:,i,].mean()) stds[i].append(ims[:,i,].std()) means_list = [np.mean(means[i]) for i in means] stds_list = [np.mean(stds[i]) for i in stds] means_list, stds_list ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. 
Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code means_list, data_dir = path/'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(20), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0), transforms.ToTensor(), transforms.Normalize(means_list, stds_list) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(means_list, stds_list) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir/'train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir/'test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=True) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper !wget https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip !unzip Cat_Dog_data.zip ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
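###Markdown To make the dataset/`DataLoader` relationship above concrete: the dataset indexes individual images, while the loader groups them into batches. Here's a quick sketch (assuming the `Cat_Dog_data/train` folder from the download above and a batch size of 32). ###Code
probe_transform = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])
probe_data = datasets.ImageFolder('Cat_Dog_data/train', transform=probe_transform)
probe_loader = torch.utils.data.DataLoader(probe_data, batch_size=32, shuffle=True)

print(len(probe_data))     # number of individual images
print(len(probe_loader))   # number of batches, i.e. ceil(len(probe_data) / 32)

images, labels = next(iter(probe_loader))
print(images.shape)        # torch.Size([32, 3, 224, 224]): batch, channels, height, width
print(labels.shape)        # torch.Size([32])
 ###Output _____no_output_____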
###Code data_dir = 'Cat_Dog_data/train'

import numpy as np   # needed by the imshow helper below for un-normalizing and clipping

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder(data_dir, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

def imshow(image, ax=None, title=None, normalize=True):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()
    image = image.numpy().transpose((1, 2, 0))

    if normalize:
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        image = std * image + mean
        image = np.clip(image, 0, 1)

    ax.imshow(image)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.spines['left'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.tick_params(axis='both', length=0)
    ax.set_xticklabels('')
    ax.set_yticklabels('')

    return ax

# Run this to test your data loader
images, labels = next(iter(dataloader))
imshow(images[0], normalize=False)
 ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and a list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
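###Markdown When you do add normalization later, keep in mind that `transforms.Normalize` operates on tensors, so inside a `Compose` pipeline it has to come after `transforms.ToTensor()`. A minimal sketch of the intended ordering (the 0.5 means/stds are the generic values from the example above, not statistics computed from this dataset). ###Code
# Augmentations act on the PIL image, ToTensor converts it, Normalize acts on the tensor
normalized_train_transforms = transforms.Compose([
    transforms.RandomRotation(30),            # PIL image in, PIL image out
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),                    # PIL image -> FloatTensor in [0, 1]
    transforms.Normalize([0.5, 0.5, 0.5],     # tensor -> (tensor - mean) / std
                         [0.5, 0.5, 0.5]),
])
 ###Output _____no_output_____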
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir,transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code import os data_dir = os.path.abspath(os.path.join(os.getcwd(), '../datasets/cat-and-dog/training_set/training_set/')) transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
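(Aside: to make the normalization formula above concrete, here is a minimal sketch of what `transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])` does to values that `ToTensor()` has already scaled into [0, 1]. The tiny tensor below is made up purely for illustration.)

```python
import torch
from torchvision import transforms

# A fake 3-channel, 2x2 "image" with values already in [0, 1], as ToTensor() would produce
img = torch.tensor([[0.0, 0.25], [0.5, 1.0]]).repeat(3, 1, 1)

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
out = normalize(img)

# Each channel becomes (value - 0.5) / 0.5, so 0.0 -> -1.0, 0.25 -> -0.5, 0.5 -> 0.0, 1.0 -> 1.0
print(out[0])
```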
###Code train_dir = os.path.abspath(os.path.join(os.getcwd(), '../datasets/cat-and-dog/training_set/training_set/')) test_dir = os.path.abspath(os.path.join(os.getcwd(), '../datasets/cat-and-dog/test_set/test_set/')) # assumed location of the held-out test split # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(train_dir, transform=train_transforms) test_data = datasets.ImageFolder(test_dir, transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn import torch.nn.functional as F class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(224*224*3, 2048) self.fc2 = nn.Linear(2048, 1024) self.fc3 = nn.Linear(1024, 512) self.fc4 = nn.Linear(512, 128) self.fc5 = nn.Linear(128, 64) self.out = nn.Linear(64, 2) self.dropout = nn.Dropout(0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = self.dropout(F.relu(self.fc5(x))) x = F.log_softmax(self.out(x), dim=1) return x ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code #Import drive from google.colab import drive #Mount Google Drive drive.mount("/content/drive") %cd '/content/drive/My Drive/Udacity/deep-learning-v2-pytorch/intro-to-pytorch/' %pwd #import helper.py import imp helper = imp.new_module('helper') exec(open("./helper.py").read(), helper.__dict__) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms #import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
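(Aside: once the `ImageFolder` in the next cell exists, you can confirm the directory-name-to-label mapping described above. A minimal sketch, assuming a `dataset` built from a folder with `cat` and `dog` subdirectories.)

```python
# Assumes `dataset` was created with datasets.ImageFolder, as in the cell below
print(dataset.classes)        # e.g. ['cat', 'dog'], the subdirectory names in sorted order
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}, the integer label for each class
print(len(dataset))           # total number of images found

image, label = dataset[0]     # a single (transformed image, integer label) pair
print(image.shape, label)
```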
###Code data_dir = '/content/drive/My Drive/Cat_Dog_Data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '/content/drive/My Drive/Cat_Dog_Data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(15), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn images, labels = next(iter(trainloader)) print(images[0].shape) print(images.view(images.shape[0],-1).shape) model = nn.Sequential(nn.Linear(150528, 1048), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(1048, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 2), nn.LogSoftmax(dim = 1)) model from torch import optim criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr = 0.003) epochs = 30 train_loss_list = [] val_loss_list = [] for e in range(epochs): print(f'Epoch {e + 1}') train_loss = 0 for images, labels in trainloader: images = images.view(images.shape[0], -1) optimizer.zero_grad() out = model(images) loss = criterion(out, labels) train_loss += loss.item() loss.backward() optimizer.step() else: val_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: images = images.view(images.shape[0], -1) out = model(images) loss = criterion(out, labels) val_loss += loss.item() top_p, top_class = torch.topk(torch.exp(out), 1, dim = 1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_loss_list.append(train_loss/len(trainloader)) val_loss_list.append(val_loss/len(testloader)) print(f'Epoch {e+1}-- Accuracy: {(accuracy.item() * 100)/len(testloader)}% Train Loss: {train_loss/len(trainloader)} Test Loss: {val_loss/len(testloader)}') model.train() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
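(Aside: the two access patterns described above, looping versus grabbing a single batch with `next(iter(...))`, can be sanity-checked with a sketch like this one. It assumes a `dataloader` such as the one built in the following cell.)

```python
# Assumes `dataloader` yields (images, labels) batches, as built in the next cell

# Pattern 1: grab a single batch
images, labels = next(iter(dataloader))
print(images.shape)   # e.g. torch.Size([64, 3, 224, 224]) for batch_size=64
print(labels[:10])    # integer class indices taken from the directory names

# Pattern 2: loop over every batch, i.e. one full pass through the data
n_images = 0
for images, labels in dataloader:
    n_images += images.shape[0]
print(n_images, 'images in total')
```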
###Code data_dir = '/Users/seongjaeryu/dataset/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 64, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
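(Aside: the `[0.5, 0.5, 0.5]` means and standard deviations used above are just a convenient default. If you wanted values matched to this dataset, a rough estimate can be computed from a few batches. A minimal sketch, assuming a loader that yields un-normalized `ToTensor()` images, such as the `dataloader` from the previous cell.)

```python
import torch

# Rough per-channel statistics over a handful of batches (pixel values in [0, 1])
n_batches = 10
mean = torch.zeros(3)
std = torch.zeros(3)

for i, (images, _) in enumerate(dataloader):
    if i >= n_batches:
        break
    flat = images.view(images.shape[0], images.shape[1], -1)  # (batch, 3, H*W)
    mean += flat.mean(2).mean(0)   # average over pixels, then over the batch
    std += flat.std(2).mean(0)

print('mean per channel:', mean / n_batches)   # approximate values to pass to Normalize
print('std per channel:', std / n_batches)
```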
###Code data_dir = '/Users/seongjaeryu/dataset/Cat_Dog_data' # TODO: Define transforms for the training data and testing data # training data: typically do data augmentatiation train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code image, label = next(iter(trainloader)) image.shape image.shape[-1]*image.shape[-2]*image.shape[-3] # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn # Network architecture n_classes = 2 model = nn.Sequential(nn.Linear(image.shape[-1]*image.shape[-2]*image.shape[-3], 224), nn.ReLU(), nn.Linear(224, 112), nn.ReLU(), nn.Linear(112, 56), nn.ReLU(), nn.Linear(56, n_classes), nn.LogSoftmax(dim=1) ) from torch import optim criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.03) epochs = 1 steps = 0 size_train = len(trainloader) size_test = len(testloader) batch_size = 32 channels = 3 train_losses, test_losses = [], [] for epoch in range(epochs): running_loss = 0 for images, labels in trainloader: images = images.view(images.shape[0], -1) optimizer.zero_grad() output = model(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): for images, labels in testloader: images = images.view(images.shape[0], -1) output = model(images) test_loss += criterion(output, labels).item() output = torch.exp(output) _, top_class = output.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/size_train) test_losses.append(test_loss/size_test) print("Epoch: {}/{}.. ".format(epoch+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
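(Aside: a quick way to see why `shuffle=True` matters here: `ImageFolder` lists files one class directory at a time, so an unshuffled loader tends to serve an entire class before the other. A minimal sketch, assuming the `dataset` built in the next cell.)

```python
# Assumes `dataset` is the ImageFolder created in the next cell
loader_ordered = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False)
loader_shuffled = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

_, labels_ordered = next(iter(loader_ordered))
_, labels_shuffled = next(iter(loader_shuffled))

print(labels_ordered)    # typically all the same class, since files are listed class by class
print(labels_shuffled)   # a mix of classes, which is what you want for training
```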
###Code data_dir = 'data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'data/' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
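(Aside: `Compose` simply applies the listed transforms one after another, so the pipeline above can be unrolled by hand to see what each step does. A minimal sketch, where `img_path` stands in for any one photo from the dataset.)

```python
from PIL import Image
from torchvision import transforms

img = Image.open(img_path)             # img_path: path to any cat or dog photo (placeholder)
print(img.size)                        # original (width, height), varies from photo to photo

img = transforms.Resize(255)(img)      # shorter side becomes 255 pixels, aspect ratio kept
print(img.size)

img = transforms.CenterCrop(224)(img)  # a 224x224 crop taken from the middle
print(img.size)

tensor = transforms.ToTensor()(img)    # PIL image -> FloatTensor of shape (3, 224, 224), values in [0, 1]
print(tensor.shape, tensor.min().item(), tensor.max().item())
```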
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5,0.5,0.5])]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(40), transforms.RandomResizedCrop(64), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([ transforms.Resize(64), transforms.CenterCrop(64), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(12288,256) self.fc3 = nn.Linear(256, 122) self.fc4 = nn.Linear(122, 64) self.fc5 = nn.Linear(64, 32) self.fc6 = nn.Linear(32, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = self.dropout(F.relu(self.fc5(x))) x = F.log_softmax(self.fc6(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. 
".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) print(224*224*3) ###Output 150528 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255) , transforms.CenterCrop(224) , transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
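One point worth making explicit about the augmentation described above: the random transforms are re-drawn every time an item is fetched, so the network sees a fresh variant of each image on every epoch. The sketch below uses `torchvision.datasets.FakeData` purely as a stand-in dataset (it exposes the same `dataset[index] -> (image, label)` interface) so it runs without downloading Cat_Dog_data.

```python
import torch
from torchvision import datasets, transforms

aug = transforms.Compose([transforms.RandomRotation(30),
                          transforms.RandomResizedCrop(224),
                          transforms.RandomHorizontalFlip(),
                          transforms.ToTensor()])

# Stand-in dataset; with ImageFolder the behaviour is the same
fake = datasets.FakeData(size=4, image_size=(3, 255, 255), transform=aug)

first, _ = fake[0]
second, _ = fake[0]
print(torch.equal(first, second))  # usually False: two different random crops/rotations
```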
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30) ,transforms.RandomCrop(224) ,transforms.RandomHorizontalFlip() ,transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255) ,transforms.CenterCrop(224) ,transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn from torch import optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() # define hidden layers self.sc1 = nn.Linear() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
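A back-of-the-envelope way to see why a fully-connected network struggles here: compare the size of just the first weight matrix for a 28x28 grayscale input versus a 224x224 RGB input. The 512 hidden units are an arbitrary illustrative choice.

```python
from torch import nn

mnist_fc = nn.Linear(28 * 28 * 1, 512)     # Fashion-MNIST-sized input
catdog_fc = nn.Linear(224 * 224 * 3, 512)  # cat/dog-sized input

print(sum(p.numel() for p in mnist_fc.parameters()))   # 401,920
print(sum(p.numel() for p in catdog_fc.parameters()))  # 77,070,848
```

That single layer is already around 77 million parameters before adding any depth, which is a large part of why the pre-trained approach mentioned above scales so much better.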
###Code from torch import nn, optim import torch.nn.functional as F img_example, img_value = next(iter(train_data)) print(img_example.shape) print(img_example.reshape(img_example.shape[0], -1).shape) class MyModel(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 500) self.fc2 = nn.Linear(500, 128) self.fc3 = nn.Linear(128, 2) self.dropout = nn.Dropout(p=0.3) def layer(self, function, x): return self.dropout(F.relu(function(x))) def forward(self, x): x = x.reshape(1, -1) x = self.layer(self.fc1, x) x = self.layer(self.fc2, x) return F.log_softmax(self.fc3(x)) model = MyModel() criterion = nn.NLLLoss() opt = optim.ASGD(model.parameters(), lr=0.001) epochs = 1 train_losses = [] test_losses = [] accuracies = [] for e in range(epochs): print(f'Start of epoch {e}') train_epoch_loss = 0 for image, label in train_data: opt.zero_grad() output = model.forward(image) # output shape should be [len(images), n_classes] train_loss = criterion(output, torch.LongTensor([label])) # second element shape should be [len(images)] train_loss.backward() train_epoch_loss += train_loss.item() opt.step() else: with torch.no_grad(): test_epoch_loss = 0 test_epoch_accuracy = 0 for image, label in test_data: log_ps = model.forward(image) output = torch.exp(log_ps) test_epoch_loss += criterion(log_ps, torch.Tensor(label)) equals = output == labels.reshape(*output.shape) test_epoch_accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(train_epoch_loss/len(train_data)) test_losses.append(test_epoch_loss/len(test_data)) accuracies.append(test_epoch_accuracy/len(test_data)) print(f'End of epoch {e}') ###Output Start of epoch 0 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'data/Cat_Dog_data/train' transform = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ] ) dataset = datasets.ImageFolder( data_dir, transform=transform ) dataloader = torch.utils.data.DataLoader( dataset, batch_size=64, shuffle = True ) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code import os os.environ["KMP_DUPLICATE_LIB_OK"] = 'True' %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
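For reference, this is the batch structure the `DataLoader` hands back on each iteration. `FakeData` is used here as a stand-in for the cat/dog `ImageFolder` so the sketch runs without the dataset on disk; the shapes are what you should see from your own loaders as well.

```python
import torch
from torchvision import datasets, transforms

fake = datasets.FakeData(size=100, image_size=(3, 224, 224),
                         num_classes=2, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(fake, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)  # torch.Size([32, 3, 224, 224])
print(labels.shape)  # torch.Size([32]): one integer class label per image
```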
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = train_transforms # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
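The fully-connected attempts in this notebook hard-code the flattened input size (150528). A small sketch of how that number follows from the batch shape, so the first `nn.Linear` can be derived from the data instead of typed in by hand; the random tensor stands in for a batch from `trainloader`.

```python
import torch
from torch import nn

images = torch.randn(32, 3, 224, 224)     # stand-in for a batch from trainloader
flat = images.view(images.shape[0], -1)   # keep the batch dim, flatten the rest
print(flat.shape)                         # torch.Size([32, 150528]) = 3 * 224 * 224

fc1 = nn.Linear(flat.shape[1], 256)       # in_features follows from the data
print(fc1)
```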
###Code from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 512) self.fc2 = nn.Linear(512, 256) self.fc3 = nn.Linear(256, 128) self.fc4 = nn.Linear(128, 64) self.fc5 = nn.Linear(64, 10) self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = F.log_softmax(self.fc5(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: ## TODO: Implement the validation pass and print out the validation accuracy val_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for imgs, labels in testloader: log_probs = model(imgs) val_loss += criterion(log_probs, labels) probs = torch.exp(log_probs) top_p, top_class = probs.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(val_loss/len(testloader)) model.train() print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(val_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) # Denied. ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. 
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) # data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import fc_model import importlib importlib.reload(fc_model) input_size = images.view(images.shape[0], -1).shape[1] model = fc_model.Network(input_size, 10, [512, 256, 128]) criterion = torch.nn.NLLLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) # for images, labels in trainloader: # images = images.view(images.shape[0], -1); # print(images.shape) # break fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) ###Output Epoch: 1/2.. Training Loss: 0.073.. Test Loss: 365.145.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 385.587.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 386.004.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 386.010.. Test Accuracy: 0.494 Epoch: 1/2.. Training Loss: 0.000.. Test Loss: 385.995.. Test Accuracy: 0.494 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch.utils.data import DataLoader from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) print(labels) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) images, labels = next(data_iter) print(labels) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
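As a preview of the pre-trained approach mentioned above (it is covered properly in the next part, so treat this as a sketch rather than this notebook's solution), the usual pattern is to freeze a torchvision model's convolutional features and train only a new two-class head. Newer torchvision releases prefer a `weights=` argument over `pretrained=True`.

```python
from torch import nn, optim
from torchvision import models

model = models.resnet18(pretrained=True)   # downloads ImageNet weights

for param in model.parameters():           # freeze the feature extractor
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # new, trainable 2-class head
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
```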
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ## TODO: Define your model with dropout added from torch import nn, optim import torch.nn.functional as F class MyClassifier(nn.Module): def __init__(self): super().__init__() self.layer1 = nn.Linear(150528, 256) self.layer2 = nn.Linear(256, 128) self.layer3 = nn.Linear(128, 64) self.layer4 = nn.Linear(64, 32) self.layer5 = nn.Linear(32, 1) self.dropout = nn.Dropout(p=0.3) def forward(self, x): x = x.view(x.shape[0], -1) x = self.layer1(x) x = F.relu(x) x = self.dropout(x) x = self.layer2(x) x = F.relu(x) x = self.dropout(x) x = self.layer3(x) x = F.relu(x) x = self.dropout(x) x = self.layer4(x) x = F.relu(x) x = self.dropout(x) x = self.layer5(x) x = F.sigmoid(x) return x import time model = MyClassifier() model.cpu() criterion = nn.BCELoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epoch = 1 images, labels = next(data_iter) output = model(images) # labels = labels.view((32)) # output = output.view((32)) print(labels.shape) print(output.shape) loss = criterion(output, labels.type(torch.FloatTensor)) loss output.shape ## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy for device in ['cpu', 'cuda']: train_losses = [] test_losses = [] criterion = nn.BCELoss() optimizer = optim.Adam(model.parameters(), lr=0.003) model.to(device) for e in range(epoch): train_loss = 0 train_accuracy = 0 test_loss = 0 test_accuracy = 0 start = time.time() for images, labels in trainloader: optimizer.zero_grad() output = model(images.to(device)) loss = criterion(output, labels.type(torch.FloatTensor).to(device)) train_loss += loss # top_p, top_class = torch.topk(output, 1, dim = 1) # correct = top_class == labels.cuda(0).view(*top_class.shape) # accuracy = torch.mean(correct.type(torch.FloatTensor)) predictions = (output >= 0.5) correct = predictions.type(torch.FloatTensor).to(device) == labels.to(device).view(*predictions.shape) accuracy = torch.mean(correct.type(torch.FloatTensor)) train_accuracy += accuracy loss.backward() optimizer.step() else: print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds, at train") with torch.no_grad(): model.eval() start = time.time() for images, labels in testloader: output = model(images.to(device)) loss = criterion(output, labels.type(torch.FloatTensor).to(device)) test_loss += loss predictions = (output >= 0.5) correct = predictions.type(torch.FloatTensor).to(device) == labels.to(device).view(*predictions.shape) accuracy = torch.mean(correct.type(torch.FloatTensor)) test_accuracy += accuracy print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds, at test") model.train() test_losses.append(test_loss) train_losses.append(train_loss) print("epoch:", e) print("train loss:", train_loss.item(), "test loss:", test_loss.item()) print("train accuracy:", train_accuracy.item() / len(trainloader), "test accuracy:", test_accuracy.item() / len(testloader)) ###Output C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\torch\nn\functional.py:1569: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.") C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\torch\nn\modules\loss.py:516: UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])) is deprecated. Please ensure they have the same size. 
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) C:\Users\Mohammed\.conda\envs\dl_env\lib\site-packages\torch\nn\modules\loss.py:516: UserWarning: Using a target size (torch.Size([4])) that is different to the input size (torch.Size([4, 1])) is deprecated. Please ensure they have the same size. return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'assets/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'assets/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])] ) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])] ) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
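Since both pipelines above apply `transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])`, it can help to see the `(input - mean) / std` arithmetic on a toy tensor: after `ToTensor()` the values sit in [0, 1], and this normalization maps them to [-1, 1]. A tiny sketch, using a single channel just to keep the numbers readable:

```python
import torch
from torchvision import transforms

toy = torch.tensor([[[0.0, 0.5, 1.0]]])      # a 1x1x3 "image" with one channel, values in [0, 1]
norm = transforms.Normalize([0.5], [0.5])    # applies (x - 0.5) / 0.5 to that channel
print(norm(toy))                             # tensor([[[-1., 0., 1.]]])
```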
###Code from torch import nn, optim import torch.nn.functional as F import numpy as np class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x def calc_accuracy( predicted, labels ): with torch.no_grad(): pred_acc, predic_class = predicted.topk( 1, dim =1 ) equals = ( predic_class == labels.view( *predic_class.shape ) ) accuracy = torch.mean( equals.type( torch.FloatTensor ) ) return accuracy.item()*100 def train_model( model, criteion, optmizer, epochs, verbose=False, device='cpu' ): train_losses, test_losses = [], [] model.to( device ) for e in range(epochs): model.train() for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() log_ps = model(inputs) loss = criterion(log_ps, labels) loss.backward() optimizer.step() train_losses.append( loss.item() ) if( verbose ): print( '\rloss:', loss.item() ) else: with torch.no_grad(): model.eval() for ii, (inputs, labels) in enumerate(testloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) log_ps = model( inputs ) loss = criterion(log_ps, labels) predicted = torch.exp( log_ps ) accuracy = calc_accuracy( predicted, labels ) test_losses.append( loss.item() ) if( verbose ): print( f'Epoch: {e}') print( f'Training loss: {np.mean( train_losses )}' ) print( f'Validation loss: {np.mean( test_losses )}' ) print( f'---######################---\n') return train_losses, test_losses modela = Classifier() optimizer = optim.Adam(modela.parameters(), lr=0.003) criterion = nn.NLLLoss() epochs = 30 train_losses_1, test_losses_1 = train_model( modela, criterion, optimizer, epochs, verbose=True, device='cuda' ) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way).
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) # augmentation recipe from above, without Normalize test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # just resize and crop for testing # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name.
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
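If you do experiment with a fully-connected network here, one thing that saves a lot of debugging is checking the flattened size of a real batch before choosing the first `nn.Linear`'s `in_features` — it has to equal `channels * height * width` of the transformed images. A quick sanity-check sketch, assuming the `trainloader` defined above:

```python
images, labels = next(iter(trainloader))
print(images.shape)                      # e.g. torch.Size([32, 3, 224, 224]) with 224x224 crops
flat = images.view(images.shape[0], -1)  # flatten each image in the batch
print(flat.shape[1])                     # 3 * 224 * 224 = 150528 input features
```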
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import torch from torchvision import datasets, transforms from torch import nn from torch import optim import torch.nn.functional as F import helper data_dir = 'Cat_Dog_data' # Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(784), transforms.RandomRotation(30), transforms.RandomResizedCrop(28), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 784) self.fc2 = nn.Linear(784, 256) self.fc3 = nn.Linear(256, 128) self.fc4 = nn.Linear(128, 64) self.fc5 = nn.Linear(64, 32) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) # output so no dropout here x = F.log_softmax(self.fc5(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) images, labels = next(iter(trainloader)) log_ps = model(images[1]) print(images[1].shape) print(log_ps.shape) print(labels.shape) ###Output torch.Size([3, 28, 28]) torch.Size([3, 32]) torch.Size([32]) ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/Users/ashabb/Downloads/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '~/Downloads/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
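If you try a convolutional network instead (as in the next cell), the awkward part is usually working out how many features come out of the conv/pool stack before the first fully-connected layer. One low-effort way, sketched here with the same conv/pool configuration as the attempt below, is to push a dummy batch through and read the shape off:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 16, 3, padding=1)    # 3 input channels -> 16 feature maps, 3x3 kernel, same padding
pool = nn.MaxPool2d(2, 2)                # halves height and width

dummy = torch.zeros(1, 3, 224, 224)      # one fake image with the shape produced by the transforms above
out = pool(F.relu(conv(dummy)))
print(out.shape)                         # torch.Size([1, 16, 112, 112])
print(out.view(1, -1).shape[1])          # 16 * 112 * 112 = 200704 flattened features
```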
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 16, 3, padding = 1) # 3 input channels -> 16 feature maps self.maxpool = nn.MaxPool2d(2, 2) # halves height and width self.fc = nn.Linear(16 * 112 * 112, 2) # classifier head; assumes 224x224 inputs from the transforms above def forward(self, x): x = self.maxpool(F.relu(self.conv1(x))) x = x.view(x.shape[0], -1) # flatten before the fully-connected layer x = self.fc(x) return x catdogclassifier = Net() #set loss function criterion = nn.CrossEntropyLoss() #set optimizer (pass in the model's parameters) optimizer = optim.SGD(catdogclassifier.parameters(), lr = 0.0001, momentum = 0.9) params = catdogclassifier.parameters() len([p for p in params]) #train network for epoch in range(2): for i, data in enumerate(trainloader): # cat/dog trainloader defined above training_data, training_labels = data optimizer.zero_grad() #forward output = catdogclassifier(training_data) loss = criterion(output, training_labels) #backward loss.backward() optimizer.step() #data generating functions import numpy as np import math def get_sample(sample_size): x = np.linspace(-np.pi, np.pi, sample_size) gaussian_noise = np.random.normal(scale = 0.5, size = sample_size) random_shift = np.random.uniform(low = 0, high = 50) sinx = np.sin(x+random_shift) noisy_sinx = sinx + gaussian_noise return sinx, noisy_sinx def sample_generator(num_samples): counter= 0 while counter<num_samples: yield get_sample(100) counter += 1 sinx, noisy_sinx = get_sample(100) sinx1, noisy_sinx1 = get_sample(100) # visualize import matplotlib.pyplot as plt %matplotlib inline plt.plot(sinx, label = "sin") plt.plot(noisy_sinx, label = "noisy sin") plt.plot(sinx1, label = "sin1") plt.plot(noisy_sinx1, label = "noisy sin1") plt.legend() ##can we train an RNN to predict the original signal? import torch from torch.utils.data import DataLoader #generate training dataset true_signal_train = [] input_signal_train = [] for t, i in training_data: true_signal_train.append(torch.Tensor(t)) input_signal_train.append(torch.Tensor(i)) #generate test dataset true_signal_test = [] input_signal_test = [] for t, i in training_data: true_signal_test.append(torch.Tensor(t)) input_signal_test.append(torch.Tensor(i)) true_signal_train[0] import torch.nn as nn class SinDataset(torch.utils.data.IterableDataset): def __init__(self, data_generator): self.data_generator = data_generator def __iter__(self): return self.data_generator #create dataset from numpy arrays #train_dataset = torch.utils.data.TensorDataset(true_signal_train) #test_dataset = torch.utils.data.TensorDataset(true_signal_test) training_data = sample_generator(10000) test_data = sample_generator(1000) train_dataset = SinDataset(training_data) test_dataset = SinDataset(test_data) train_loader = DataLoader(train_dataset, shuffle = True, batch_size = 32) test_loader = DataLoader(test_dataset, shuffle = True, batch_size = 32) #create vanilla RNN class VanillaRNN(nn.Module): def __init__(self, input_size): self.rnn = nn.RNN(input_size = sequence_length, hidden_size = sequence_length, output_size = sequence_length, batch_first = True) def forward(self): self.rnn(x) #train vanilla RNN #evaluate #add layers to RNN #try a different optimizer nw=torch.utils.data.get_worker_info() nw ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle.
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 63, shuffle = True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([.5, .5, .5], [.5, .5, .5])]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([.5, .5, .5], [.5, .5, .5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code from torch import nn import torch.nn.functional as F from torch import optim fake_list = [100, 200, 300, 500] # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class MyNet(nn.Module): def __init__(self, input_size, fc1, fc2, fc3, classes): super().__init__() self.fc1 = nn.Linear(input_size, fc1) self.fc2 = nn.Linear(fc1, fc2) self.fc3 = nn.Linear(fc2, fc3) self.output = nn.Linear(fc3, classes) self.dropout = nn.Dropout(p = .2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.output(x), dim = 1) return x classes = len(trainloader.dataset.classes) input_size = 224*224*3 fc1 = 512 fc2 = 256 fc3 = 128 model = MyNet(input_size, fc1, fc2, fc3, classes) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters()) epochs = 10 acc_history, val_acc_history, train_losses, test_losses = [], [], [], [] for e in range(epochs): acc = 0 running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) ps = torch.exp(log_ps) loss = criterion(log_ps, labels) _, top_class = ps.topk(1, dim = 1) equals = top_class == labels.view(*top_class.shape) acc = torch.mean(equals.type(torch.FloatTensor)) loss.backward() optimizer.step() running_loss += loss.item() else: val_acc = 0 val_loss = 0 with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) ps = torch.exp(log_ps) loss = criterion(log_ps, labels) _, top_class = ps.topk(1, dim = 1) equals = top_class == labels.view(*top_class.shape) val_acc = torch.mean(equals.type(torch.FloatTensor)) val_loss += loss.item() model.train() acc_history.append(acc.item()) val_acc_history.append(val_acc.item()) train_losses.append(running_loss/len(trainloader)) test_losses.append(val_loss/len(testloader)) print("Epoch: {}/{}".format(e+1, epochs)) print("loss: {}, val_loss: {}, acc: {}, val_acc: {}".format(running_loss/len(trainloader), val_loss/len(testloader), acc.item(), val_acc.item())) ###Output _____no_output_____ ###Markdown images are too big using fc layers ###Code ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform_train = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform_train)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
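Before filling in the transforms, it can help to check the normalization formula above on a toy tensor. The snippet below is just a quick sketch with made-up values (a mean and standard deviation of 0.5 for every channel), not part of the exercise solution:

```python
import torch
from torchvision import transforms

img = torch.rand(3, 224, 224)                        # a fake image tensor in [0, 1]
manual = (img - 0.5) / 0.5                           # apply the formula by hand
norm = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

print(torch.allclose(norm(img.clone()), manual))     # True - same result
print(manual.min().item(), manual.max().item())      # roughly -1 and 1
```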
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
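Once you have the `dataset` and `dataloader` from this exercise, a few quick checks can confirm everything was wired up correctly. This is just a sketch using attributes that `ImageFolder` exposes (`classes` and `class_to_idx`) and the `dataset`/`dataloader` names from the cell below:

```python
# Sanity checks for the ImageFolder and DataLoader built in this exercise
print(dataset.classes)        # e.g. ['cat', 'dog'] - taken from the folder names
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}
print(len(dataset))           # total number of images found on disk

images, labels = next(iter(dataloader))
print(images.shape)           # [batch_size, 3, 224, 224] after resize + center crop + ToTensor
print(labels[:8])             # integer class indices for the first few images in the batch
```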
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
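If you get stuck on the blanks in the next cell, here is one reasonable way to fill them in, loosely following the augmentation example above: random augmentation for training, and a deterministic resize plus center crop for testing. Treat it as a sketch, not the single correct answer:

```python
# One possible answer for the TODO below - random augmentation for training,
# deterministic preprocessing for testing (normalization left off so imshow looks right)
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])
```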
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
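As an optional extra once the dataloader you build below is working: rather than guessing values like 0.5 for `transforms.Normalize`, you can estimate the per-channel mean and standard deviation from a sample of batches. This is only a rough sketch, and it assumes `dataloader` yields `[batch, 3, H, W]` tensors scaled to [0, 1] by `ToTensor()`:

```python
mean, std, n_batches = 0.0, 0.0, 0
for images, _ in dataloader:
    batch = images.view(images.size(0), images.size(1), -1)  # flatten height and width
    mean += batch.mean(2).mean(0)   # average over pixels, then over the batch
    std += batch.std(2).mean(0)
    n_batches += 1
    if n_batches == 20:             # a small sample is enough for a rough estimate
        break

print('estimated per-channel mean:', mean / n_batches)
print('estimated per-channel std: ', std / n_batches)
```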
###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
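It can also help to preview what a transform pipeline does to a single image before building the full loaders. The file path below is hypothetical (substitute any image from the training folders), so treat this as a sketch:

```python
from PIL import Image
from torchvision import transforms

img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')    # hypothetical example path - pick any training image
print(img.size)                                          # original (width, height), varies per image

preview = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip()])
print(preview(img).size)                                 # always (224, 224) after the random crop
```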
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output False ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
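One optional aside before the exercise: the download only comes with `train` and `test` folders, but if you also want a validation split you can carve one out of the training `ImageFolder` with `torch.utils.data.random_split`. This is a sketch that assumes `dataset` is the ImageFolder you build in the cell below:

```python
from torch.utils.data import random_split

n_val = int(0.2 * len(dataset))                          # hold out roughly 20% for validation
train_subset, val_subset = random_split(dataset, [len(dataset) - n_val, n_val])

train_subset_loader = torch.utils.data.DataLoader(train_subset, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_subset, batch_size=32)
print(len(train_subset), len(val_subset))
```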
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
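To make the normalization formula above concrete, here is a tiny sketch (purely illustrative; the 0.5 means and standard deviations are just the example values above, not statistics computed from this dataset). With mean 0.5 and std 0.5, pixel values in [0, 1] map onto [-1, 1]:

```python
import torch

# (pixel - mean) / std with mean = std = 0.5 maps [0, 1] onto [-1, 1]
pixels = torch.tensor([0.0, 0.5, 1.0])
print((pixels - 0.5) / 0.5)   # tensor([-1.,  0.,  1.])
```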
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'C:/Users/Asus/ML/datasets/Cat_Dog_data/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'C:/Users/Asus/ML/datasets/Cat_Dog_data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,10), ncols=10) for ii in range(10): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224), transforms.ToTensor()]); dataset = datasets.ImageFolder(data_dir, transform) dataloader = torch.utils.data.DataLoader(dataset,batch_size=34,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
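As a quick illustration of the augmentation idea above (a minimal sketch that uses a random-noise image as a stand-in for a real photo), applying a random transform pipeline to the same image twice almost always yields two different tensors, which is exactly what gives the network different views of each training example:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

aug = transforms.Compose([transforms.RandomRotation(30),
                          transforms.RandomResizedCrop(224),
                          transforms.RandomHorizontalFlip(),
                          transforms.ToTensor()])

# Random-noise stand-in image, just to demonstrate the effect
img = Image.fromarray(np.random.randint(0, 255, (375, 500, 3), dtype=np.uint8))
print(torch.equal(aug(img), aug(img)))   # almost always False: two different random views
```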
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(25), transforms.RandomHorizontalFlip(), transforms.RandomResizedCrop(244), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(255), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(28), transforms.CenterCrop(28), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.25, 0.25, 0.25], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(28), transforms.CenterCrop(28), transforms.ToTensor(), transforms.Normalize([0.25, 0.25, 0.25], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class ClassifierDropout(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784*3, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = ClassifierDropout() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # turn off gradients with torch.no_grad(): # set model to evaluation mode model.eval() # validation pass here for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels).item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) # set model back to train mode model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
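For example, here is a minimal sketch of the looping pattern (it assumes `dataloader` has been built as in the cell below, with 224-pixel crops and a batch size of 32):

```python
# Each iteration yields one batch of image tensors and their labels
for images, labels in dataloader:
    print(images.shape, labels.shape)   # e.g. torch.Size([32, 3, 224, 224]) torch.Size([32])
    break                               # stop after inspecting the first batch
```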
###Code import os os.getcwd() import torch from torchvision import datasets, transforms data_dir = 'Cat_Dog_data/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
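A side note on displaying normalized images later on: `plt.imshow` expects float images with values roughly in [0, 1], so one common trick is to undo the normalization before plotting. A minimal sketch, assuming `image` is a placeholder for one normalized `(3, H, W)` tensor and using the 0.5 means/stds from the example above:

```python
import torch
import matplotlib.pyplot as plt

mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
std = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)

# `image` stands for one normalized (3, H, W) tensor taken from a loader
unnormalized = image * std + mean           # inverse of (image - mean) / std
plt.imshow(unnormalized.permute(1, 2, 0))   # channels-last layout for matplotlib
```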
###Code data_dir = 'Cat_Dog_data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output Bad key axes.color_cycle in file /Users/mohamedabdelbary/.matplotlib/matplotlibrc, line 240 ('axes.color_cycle : 348ABD, A60628, 7A68A6, 467821,D55E00, CC79A7, 56B4E9, 009E73, F0E442, 0072B2 # color cycle for plot lines') You probably need to get an updated matplotlibrc file from https://github.com/matplotlib/matplotlib/blob/v3.3.3/matplotlibrc.template or from the matplotlib source distribution ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/Users/mohamedabdelbary/Documents/Dev/deep-learning-v2-pytorch/dogs-vs-cats/' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
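Once the `ImageFolder` datasets are created in the next cell, you can also confirm that the labels really do come from the `cat` and `dog` folder names (a small sketch; `train_data` refers to the dataset defined below):

```python
print(train_data.classes)        # e.g. ['cat', 'dog']
print(train_data.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}
```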
###Code data_dir = '/Users/mohamedabdelbary/Documents/Dev/deep-learning-v2-pytorch/dogs-vs-cats' # TODO: Define transforms for the training data and testing data # Note: the PIL-based random transforms come first, then a single ToTensor, then Normalize train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here (follows the Resize/CenterCrop/ToTensor example above) transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data (follows the augmentation example above, with normalization left off as the exercise asks) train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
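One optional tweak, not required for the exercise: `DataLoader` also accepts a `num_workers` argument, which loads and transforms batches in parallel worker processes and often speeds up reading image files from disk. A sketch, assuming `dataset` is the `ImageFolder` built in the cell below:

```python
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32,
                                         shuffle=True, num_workers=2)
```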
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
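As a quick numeric check of the normalization formula above (a standalone sketch, not part of this exercise): with a mean and standard deviation of 0.5 per channel, the `ToTensor()` range of [0, 1] maps to [-1, 1].

```python
# Standalone check of input = (input - mean) / std with mean = std = 0.5.
mean, std = 0.5, 0.5
for pixel in (0.0, 0.5, 1.0):
    print(pixel, '->', (pixel - mean) / std)
# 0.0 -> -1.0    0.5 -> 0.0    1.0 -> 1.0
```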
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.RandomRotation(30), transforms.Resize(256), transforms.CenterCrop(200), transforms.ToTensor()]) dataset = datasets.ImageFolder('./Cat_Dog_data', transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(128), transforms.RandomHorizontalFlip(), transforms.Grayscale(3), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(150), transforms.CenterCrop(128), transforms.Grayscale(3), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=8, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=8, shuffle=True) # change this to the trainloader or testloader images, labels = iter(trainloader).next() fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
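To get a feel for why a plain fully-connected network struggles here, compare the number of inputs per image with what you saw for MNIST. This is only a rough back-of-the-envelope sketch; the exact numbers depend on the crop size in your transforms.

```python
# Rough comparison of flattened input sizes (assumes a 224x224 RGB crop;
# adjust for whatever crop size your transforms actually use).
mnist_inputs = 28 * 28             # 784 values per image
cat_dog_inputs = 3 * 224 * 224     # 150,528 values per image
print(cat_dog_inputs // mnist_inputs)  # 192 -> roughly 192x more inputs to the first layer
```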
###Code # Attempt to build a network to classify cats vs dogs from this dataset images[0].shape images.shape images.view(images.shape[0],3,-1).shape images.view(images.shape[0],-1).shape images[:,1,:].shape images[:,1,:].view(images[:,1,:].shape[0],-1).shape # build a classifier from torch import nn import torch.nn.functional as F from torch import optim class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(16384, 512) self.fc2 = nn.Linear(512, 48) self.fc3 = nn.Linear(48, 2) self.dropout = nn.Dropout(p=0.02) def forward(self, X): X = self.dropout(F.relu(self.fc1(X))) X = self.dropout(F.relu(self.fc2(X))) X = F.log_softmax(self.fc3(X), dim=1) return X model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) torch.cuda.empty_cache() model.cuda() epochs = 10 test_losses, train_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: images = images[:,1,:].view(images[:,1,:].shape[0],-1) images, labels = images.cuda(), labels.cuda() optimizer.zero_grad() output = model(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: images = images[:,1,:].view(images[:,1,:].shape[0],-1) images, labels = images.cuda(), labels.cuda() log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_ps, top_pred = ps.topk(1, dim=1) matches = top_pred == labels.view(*top_pred.shape) accuracy += torch.mean(matches.type(torch.float)) model.train() test_losses.append(test_loss / len(testloader)) train_losses.append(running_loss / len(trainloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output Epoch: 1/10.. Training Loss: 0.718.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 2/10.. Training Loss: 0.693.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 3/10.. Training Loss: 0.693.. Test Loss: 0.693.. Test Accuracy: 0.501 Epoch: 4/10.. Training Loss: 0.693.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 5/10.. Training Loss: 0.700.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 6/10.. Training Loss: 0.694.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 7/10.. Training Loss: 0.694.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 8/10.. Training Loss: 0.695.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 9/10.. Training Loss: 0.694.. Test Loss: 0.693.. Test Accuracy: 0.500 Epoch: 10/10.. Training Loss: 0.693.. Test Loss: 0.694.. Test Accuracy: 0.499 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
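Once your loader works, it's worth checking the shapes of one batch. The sketch below assumes a `dataloader` built as in the next cell, with 224x224 crops, `ToTensor()`, and a batch size of 32:

```python
# Sketch: pull one batch and inspect it. With ToTensor() and 224x224 crops you
# should see images of shape [32, 3, 224, 224] (batch, channels, height, width)
# and labels of shape [32], where 0 = cat and 1 = dog (from the folder names).
images, labels = next(iter(dataloader))
print(images.shape, images.dtype)
print(labels.shape, labels[:8])
```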
###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) #data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. 
We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. 
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. 
To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
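If you want to see concretely what a pipeline like this produces, you can push a single image through it. The sketch below uses a synthetic PIL image so it runs without the dataset; a real photo would give pixel values spanning [0, 1]:

```python
from PIL import Image
from torchvision import transforms

# Minimal sketch: run one (synthetic) image through a resize/crop/ToTensor pipeline.
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
img = Image.new('RGB', (500, 375))        # stand-in for a real photo
x = transform(img)
print(x.shape, x.dtype)                   # torch.Size([3, 224, 224]) torch.float32
print(x.min().item(), x.max().item())     # 0.0 0.0 here; a real photo spans [0, 1]
```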
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
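Before filling in the exercise, it can help to convince yourself what `transforms.Normalize` does to the pixel values. A small sanity check of the formula above, using a made-up tensor rather than a real image:

```python
import torch
from torchvision import transforms

# Fake 3-channel "image" with values in [0, 1], as ToTensor would produce
img = torch.rand(3, 224, 224)

# Manual version of: input[channel] = (input[channel] - mean[channel]) / std[channel]
manual = (img - 0.5) / 0.5

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
normed = normalize(img.clone())   # clone() since some torchvision versions normalize in place

print(torch.allclose(normed, manual))             # True
print(normed.min().item(), normed.max().item())   # roughly -1 and 1
```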
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print("Label: {}".format(labels[0])) ###Output Label: 0 ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train'

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()]) # TODO: compose transforms here
dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader

# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)
###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs.
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
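To see what `Resize` and `CenterCrop` each contribute, you can step a dummy image through them one transform at a time. A small sketch using a blank PIL image, so it runs without the dataset:

```python
from PIL import Image
from torchvision import transforms

img = Image.new('RGB', (500, 375))            # dummy image, size is (width, height)

resized = transforms.Resize(255)(img)         # shorter side becomes 255, aspect ratio kept
cropped = transforms.CenterCrop(224)(resized) # take the central 224x224 patch
tensor = transforms.ToTensor()(cropped)       # convert to a float tensor in [0, 1]

print(img.size, resized.size, cropped.size)   # (500, 375) (340, 255) (224, 224)
print(tensor.shape)                           # torch.Size([3, 224, 224])
```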
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/prudhvi/fullstackproject/dogs-vs-cats/train/' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = torch.utils.data.DataLoader(dataset,batch_size=16,shuffle=False) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
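One thing worth keeping in mind for the exercise: the training transforms above are random, so every pass over the data sees a slightly different version of each image. A quick sketch of that, again on a dummy image so it runs anywhere:

```python
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip(),
                              transforms.ToTensor()])

img = Image.new('RGB', (500, 375), color=(200, 150, 100))  # dummy image

# Each call re-samples the rotation angle, crop box, and flip decision
versions = [augment(img) for _ in range(4)]
print([tuple(v.shape) for v in versions])   # all (3, 224, 224), each from fresh random parameters
```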
###Code data_dir = '/home/prudhvi/fullstackproject/dogs-vs-cats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.ToTensor(), transforms.RandomResizedCrop(224), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) # test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) # testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,6), ncols=6) for ii in range(6): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
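Once you've built the loader for the exercise, a couple of quick checks go a long way: `ImageFolder` exposes the classes it found, and each batch from the `DataLoader` should be a `[batch, channels, height, width]` tensor plus a vector of integer labels. A sketch of that check (it assumes the `Cat_Dog_data` folder from the download link above):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# ImageFolder derives labels from the directory names
print(dataset.classes)        # ['cat', 'dog']
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1}

# One batch of images and the matching integer labels
images, labels = next(iter(dataloader))
print(images.shape, labels.shape)   # torch.Size([32, 3, 224, 224]) torch.Size([32])
```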
###Code data_dir = 'Cat_Dog_data/train'

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()]) # TODO: compose transforms here

dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder

dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader

# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)
###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
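A side note for later, once you do add `Normalize` to these pipelines: normalized tensors fall outside [0, 1], which is why plotting them directly looks washed out (and why matplotlib may warn about clipping). Undoing the normalization for display is just the formula above run backwards. A small sketch, assuming the [0.5, 0.5, 0.5] means and standard deviations used earlier:

```python
import torch

mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
std = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)

def unnormalize(img):
    """Invert input = (input - mean) / std so values land back in [0, 1] for plotting."""
    return img * std + mean

# e.g. plt.imshow(unnormalize(images[0]).permute(1, 2, 0)) for an image from a normalized batch
```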
###Code data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5, 0.5, 0.5],
                                                            [0.5, 0.5, 0.5])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/harut/Projects/PyTorchCourse/deep-learning-v2-pytorch/intro-to-pytorch/Cat_Dog_data' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(labels) labels ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = './Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(root=data_dir+"/train", transform=train_transforms) test_data = datasets.ImageFolder(root=data_dir + "/test1", transform=test_transforms) print(len(train_data)) print(len(test_data)) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) print(labels[ii]) ###Output tensor(0) tensor(0) tensor(0) tensor(0) ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
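To get a feel for why a fully-connected network is a poor fit here, it's worth counting weights: a flattened 224x224 RGB image has 150,528 inputs, so even a single modest hidden layer is enormous compared with anything you've trained so far. A quick back-of-the-envelope check:

```python
inputs = 3 * 224 * 224            # flattened full-size RGB image
hidden = 4096                     # one fairly modest hidden layer

print(inputs)                     # 150528
print(f"{inputs * hidden:,}")     # 616,562,688 weights in the first layer alone

mnist_inputs = 28 * 28            # the tiny grayscale images used so far
print(f"{mnist_inputs * 256:,}")  # 200,704 weights for a comparable MNIST layer
```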
###Code images.view(images.shape[0], -1).shape from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 50176) self.fc2 = nn.Linear(50176, 25088) self.fc3 = nn.Linear(25088, 6272) self.fc4 = nn.Linear(6272, 1568) self.fc5 = nn.Linear(1568, 1) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = F.log_softmax(self.fc5(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 10 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss / len(trainloader)) test_losses.append(test_loss / len(testloader)) print("Epoch: {} / {} . .".format(e+1, epochs), "Training Loss: {:.3f} . .".format(train_losses[-1]), "Test Loss: {:.3f} . .". format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), ]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ]) test_transforms = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
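The next part walks through this properly, but as a rough sketch of where it's heading: you take a network pre-trained on ImageNet from `torchvision.models`, freeze its learned features, and train only a small new classifier on top. The details below (the choice of `resnet18`, the two-unit output head) are just one possible setup, not necessarily the one used in the next notebook:

```python
from torch import nn
from torchvision import models

# Load a network pre-trained on ImageNet and freeze its feature layers
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with a fresh 2-class head (cat vs dog)
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new layer's parameters would then be passed to the optimizer and trained
```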
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(224*224*224, 4096) self.fc2 = nn.Linear(4096, 1024) self.fc3 = nn.Linear(1024, 256) self.fc4 = nn.Linear(256, 64) self.fc5 = nn.Linear(64, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # dropout fully-connected layers x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) # output x = F.log_softmax(self.fc5(x), dim=1) return x ## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model.forward(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: with torch.no_grad(): model.eval() test_loss = 0 accuracy = 0 for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() epoch_train_loss = running_loss / len(trainloader) epoch_test_loss = test_loss / len(testloader) epoch_accuracy = accuracy / len(testloader) train_losses.append(epoch_train_loss) test_losses.append(epoch_test_loss / len(testloader)) print(' '.join([ f'Epoch: {e+1:2d}', f'Train: {epoch_train_loss:.3f}', f'Test: {epoch_test_loss:.3f}', f'Accuracy: {epoch_accuracy.item()*100:.2f}%', ])) ## Model keeps crashing on my laptop. ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5]) ]) test_transforms = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smartphone cameras. In this notebook, we'll look at how to load images and use them to train neural networks. We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images: We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision` (using the name `transform` here avoids shadowing the imported `transforms` module). ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code import os os.listdir('D:\\Documents\\pytorch-course\\Cat_Dog_data\\train') data_dir = 'D:\\Documents\\pytorch-course\\Cat_Dog_data\\train' transforms_ = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transforms_) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
###Code data_dir = 'D:\\Documents\\pytorch-course\\Cat_Dog_data\\' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + 'train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + 'test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt #import torch from torchvision import datasets, transforms !rm helper.py !wget -nc https://raw.githubusercontent.com/amandaleonel/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py import helper ###Output rm: cannot remove 'helper.py': No such file or directory --2018-12-14 11:48:30-- https://raw.githubusercontent.com/amandaleonel/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 2813 (2.7K) [text/plain] Saving to: ‘helper.py’ helper.py 100%[===================>] 2.75K --.-KB/s in 0s 2018-12-14 11:48:30 (47.7 MB/s) - ‘helper.py’ saved [2813/2813] ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
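If you want to see exactly what a pipeline like this produces, you can apply it to a single image directly. A quick sketch (the file path is only an example; point it at any image in your extracted copy of the data):

```python
from PIL import Image
from torchvision import transforms

pipeline = transforms.Compose([transforms.Resize(255),
                               transforms.CenterCrop(224),
                               transforms.ToTensor()])

# Example path only -- substitute any image from your Cat_Dog_data folder
img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')

tensor = pipeline(img)
print(tensor.shape)                               # torch.Size([3, 224, 224]), channels first
print(tensor.min().item(), tensor.max().item())   # ToTensor scales pixel values into [0, 1]
```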
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code !wget -nc https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip !unzip -q Cat_Dog_data.zip !ls Cat_Dog_data data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader !pip -q install Pillow==4.0.0 !pip -q install image images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
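Before filling in the exercise, it can help to see what the random augmentations actually do to one image; each call re-draws the rotation, crop, and flip, so every panel comes out different. A small sketch (again, the image path is only an example):

```python
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip()])

# Example path only -- any training image will do
img = Image.open('Cat_Dog_data/train/dog/dog.0.jpg')

fig, axes = plt.subplots(figsize=(10, 4), ncols=4)
for ax in axes:
    ax.imshow(augment(img))   # no ToTensor here, so augment() returns a PIL image
    ax.axis('off')
plt.show()
```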
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch import torch from torchvision import datasets, transforms !wget -c https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../../../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir,transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '../../../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) data_dir = '../../../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(200), transforms.CenterCrop(180), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(200), transforms.CenterCrop(180), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class MyModel(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(180*180*3,256) self.fc2 = nn.Linear(256,128) self.fc3 = nn.Linear(128,64) self.fc4 = nn.Linear(64,1) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def forward(self, x): x = x.view(x.shape[0],-1) x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.relu(self.fc3(x)) x = self.sigmoid(self.fc4(x)) return(x) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Using device:', device) print() #Additional Info when using cuda if device.type == 'cuda': print(torch.cuda.get_device_name(0)) print('Memory Usage:') print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') print('Cached: ', round(torch.cuda.memory_cached(0)/1024**3,1), 'GB') # torch.cuda.current_device() # torch.cuda.set_device(0) # torch.cuda.empty_cache() # next(model.parameters()).is_cuda torch.cuda.init() model = MyModel().to(device) criterion = nn.BCELoss() optimizer = optim.Adam(model.parameters(), lr=0.001) model epochs = 2 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() ps = model(images.to(device)) loss = criterion(ps.type(torch.FloatTensor).to(device), labels.type(torch.FloatTensor).to(device)) loss.backward() optimizer.step() running_loss += loss.item() else: ## TODO: Implement the validation pass and print out the validation accuracy test_loss = 0 test_acc = 0 diff_sum = 0 with torch.no_grad(): for images_test, labels_test in testloader: images_test = images_test.to(device) labels_test = labels_test.to(device) test_ps = model(images_test) test_loss = test_loss + criterion(test_ps, labels_test) top_prop, top_class = test_ps.topk(1, dim=1) diff = (top_class == labels_test.view(*top_class.shape)).type(torch.FloatTensor) diff_sum = diff_sum + torch.sum(diff) test_acc = diff_sum/len(testloader) # test_acc = torch.mean(diff) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(test_acc)) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. 
These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
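If you'd like a reference point before filling in the TODOs below, here is a minimal sketch of one way the pieces fit together, using the same 255/224 sizes shown earlier (treat it as one possible answer, not the only one):

```python
import torch
from torchvision import datasets, transforms

data_dir = 'Cat_Dog_data/train'

# Resize the short side, then take a square center crop so every image
# ends up the same shape before it becomes a tensor.
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder(data_dir, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# ImageFolder reads the class names from the sub-directory names.
print(dataset.classes)        # e.g. ['cat', 'dog']
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}

# One batch: 32 images of shape 3 x 224 x 224, plus 32 integer labels.
images, labels = next(iter(dataloader))
print(images.shape, labels.shape)
```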
###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'C:/Users/sbharati/Downloads/Cat_Dog_data/Cat_Dog_data' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(255), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
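A quick way to convince yourself what `transforms.Normalize` actually does is to apply it to a random tensor and compare against the formula above. A minimal sketch, using means and standard deviations of 0.5:

```python
import torch
from torchvision import transforms

# A stand-in "image": a 3-channel tensor with values in [0, 1],
# exactly what ToTensor() would hand to Normalize.
img = torch.rand(3, 224, 224)

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
out = normalize(img)

# (x - 0.5) / 0.5 maps [0, 1] onto [-1, 1]
print(img.min().item(), img.max().item())   # close to 0.0 and 1.0
print(out.min().item(), out.max().item())   # close to -1.0 and 1.0

# Writing the formula out by hand gives the same result.
manual = (img - 0.5) / 0.5
print(torch.allclose(out, manual))          # True
```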
###Code
data_dir = 'C:/Users/sbharati/Downloads/Cat_Dog_data/Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.Resize(255),
                                       transforms.CenterCrop(250),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(250),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
print(images.shape)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this. Training examples: Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a much higher resolution (so far you've seen 28x28 images, which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
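To see concretely why a fully-connected network is a poor fit for these images, count the weights that just the first layer would need. A back-of-the-envelope sketch (the 224 crop size and the 256-unit hidden layer are only illustrative choices):

```python
# A 224x224 RGB image flattened into one long vector:
inputs = 224 * 224 * 3           # 150,528 values per image

# Even a single, modest fully-connected layer on top of that is enormous:
hidden = 256
print(inputs * hidden)           # 38,535,168 weights in the first layer alone

# Compare with the 28x28 grayscale images used in the earlier notebooks:
print(28 * 28 * hidden)          # 200,704 weights
```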
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(187500, 15000) self.fc2 = nn.Linear(15000, 5000) self.fc3 = nn.Linear(5000, 1000) self.fc4 = nn.Linear(1000, 50) self.fc5 = nn.Linear(50, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) # output so no dropout here x = F.log_softmax(self.fc5(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model.forward(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model.forward(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code !wget https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip !unzip Cat_Dog_data.zip data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ColorJitter(brightness=0.10, contrast=0.10, saturation=0.05, hue=0.05), transforms.RandomAffine((-0.2,0.2), translate=(0.0,0.20), scale=(0.70,1.00), interpolation=transforms.InterpolationMode.BILINEAR), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) plt.imshow(images[0].numpy().transpose([1,2,0])) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
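The clipping warnings above appear because a normalized tensor (values in [-1, 1]) is handed straight to `imshow`, which expects values in [0, 1]. If you want a clean preview, one option is to undo the normalization first; a small sketch, assuming the 0.5 means and standard deviations used in the cell above:

```python
import matplotlib.pyplot as plt

# images[0] was normalized with mean 0.5 and std 0.5, so its values lie in
# [-1, 1]; undo that before plotting, since imshow expects values in [0, 1].
img = images[0] * 0.5 + 0.5                # reverses (x - mean) / std
img = img.clamp(0, 1)                      # guard against tiny numeric overshoot
plt.imshow(img.permute(1, 2, 0).numpy())   # channels-last for matplotlib
plt.show()
```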
###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ColorJitter(brightness=0.10, contrast=0.10, saturation=0.05, hue=0.05), transforms.RandomAffine((-0.2,0.2), translate=(0.0,0.30), scale=(0.50,1.50), interpolation=transforms.InterpolationMode.BILINEAR), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] ax.imshow(images[ii].numpy().transpose([1,2,0])) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
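If you don't have the course's `helper` module handy, `torchvision.utils.make_grid` works as a stand-in for previewing a few transformed images; a small sketch, again assuming the 0.5/0.5 normalization used in the cell above:

```python
import matplotlib.pyplot as plt
from torchvision.utils import make_grid

# Undo the 0.5/0.5 normalization, then arrange the first four images of the
# batch into one grid tensor and move channels last so matplotlib can draw it.
grid = make_grid(images[:4] * 0.5 + 0.5, nrow=4)
plt.figure(figsize=(10, 4))
plt.imshow(grid.permute(1, 2, 0).clamp(0, 1).numpy())
plt.show()
```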
###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
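One thing worth seeing for yourself is that these random transforms are applied fresh every time an image is fetched, not once up front. A small sketch that makes this visible (assuming the same `Cat_Dog_data/train` folder used earlier):

```python
import torch
from torchvision import datasets, transforms

aug = transforms.Compose([transforms.RandomRotation(30),
                          transforms.RandomResizedCrop(224),
                          transforms.RandomHorizontalFlip(),
                          transforms.ToTensor()])

train_data = datasets.ImageFolder('Cat_Dog_data/train', transform=aug)

# The transform pipeline runs again on every fetch, so reading the same
# index twice almost certainly yields two different tensors.
first, _ = train_data[0]
second, _ = train_data[0]
print(torch.equal(first, second))   # very likely False
```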
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.Resize(255), transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:

```python
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
```

You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and a list of standard deviations, then the color channels are normalized like so

```
input[channel] = (input[channel] - mean[channel]) / std[channel]
```

Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.

You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.

>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Test images should only be resized and center-cropped, not randomly augmented
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output _____no_output_____ ###Markdown Your transformed images should look something like this. Training examples: Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a much higher resolution (so far you've seen 28x28 images, which are tiny). In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image Data So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras.
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code import torch from torchvision import datasets, transforms data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224) , transforms.ToTensor() ]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir,transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) images[0] ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
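The `DataLoader` parameters mentioned above (batch size, shuffling) are worth setting differently for the two splits. A small sketch, with placeholder transforms and an illustrative `num_workers` value:

```python
import torch
from torchvision import datasets, transforms

basic = transforms.Compose([transforms.Resize(255),
                            transforms.CenterCrop(224),
                            transforms.ToTensor()])

train_data = datasets.ImageFolder('Cat_Dog_data/train', transform=basic)
test_data = datasets.ImageFolder('Cat_Dog_data/test', transform=basic)

# Shuffle the training loader so each epoch sees images in a new order, but
# keep the test loader deterministic. num_workers moves image decoding into
# background processes (the value 2 is just an illustrative choice).
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32,
                                         shuffle=False, num_workers=2)

print(len(trainloader), 'training batches,', len(testloader), 'test batches')
```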
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([ transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/train', transform=test_transforms) #data_dir + '/test' trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False); ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.RandomHorizontalFlip(), transforms.RandomRotation(30), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
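As a very small preview of that approach (it's covered properly in the next part), torchvision ships models pretrained on ImageNet whose feature layers can be frozen and whose final classifier can be swapped for a two-class head. This is only a sketch, not this notebook's solution:

```python
import torch
from torch import nn
from torchvision import models

# Load a network pretrained on ImageNet and freeze its feature weights.
# (Newer torchvision versions prefer the weights= argument over pretrained=True.)
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Swap the final fully-connected layer for a fresh two-class head;
# only these new weights would then be trained.
model.fc = nn.Linear(model.fc.in_features, 2)
```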
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor() ]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
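As a quick sanity check of that formula, you can compare `transforms.Normalize` against doing the arithmetic by hand. A minimal sketch (the tensor below is just random values standing in for an image):

```python
import torch
from torchvision import transforms

img = torch.rand(3, 4, 4)   # fake 3-channel "image" in [0, 1], like ToTensor produces
normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

by_hand = (img - 0.5) / 0.5          # same mean/std for every channel in this case
by_transform = normalize(img)

print(torch.allclose(by_transform, by_hand))                  # True
print(by_transform.min().item(), by_transform.max().item())   # stays within [-1, 1]
```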
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=True) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=True) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
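A practical note on the display cells above: once images are normalized they no longer sit in [0, 1], so they need to be shifted back before matplotlib can show them sensibly. Here's a minimal sketch of undoing the normalization by hand, assuming `images` is a batch from one of the loaders above and the transform used mean = std = [0.5, 0.5, 0.5]:

```python
import numpy as np
import matplotlib.pyplot as plt

img = images[0].numpy().transpose((1, 2, 0))   # C x H x W -> H x W x C for matplotlib
img = img * np.array([0.5, 0.5, 0.5]) + np.array([0.5, 0.5, 0.5])   # invert (x - mean) / std
img = np.clip(img, 0, 1)

plt.imshow(img)
plt.axis('off')
plt.show()
```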
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.CenterCrop(300), transforms.Resize(256), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
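A natural follow-up question is where the mean and standard deviation values for `transforms.Normalize` should come from. One option is to estimate them from the training data itself. This is only a rough sketch; it assumes a `trainloader` built with resize/crop/`ToTensor` transforms and no normalization yet, and it relies on `Tensor.mean`/`Tensor.std` accepting a tuple of dimensions (reasonably recent PyTorch):

```python
import torch

n_batches = 10                 # a handful of batches is enough for a rough estimate
mean = torch.zeros(3)
std = torch.zeros(3)
for i, (images, _) in enumerate(trainloader):
    mean += images.mean(dim=(0, 2, 3))   # average over batch, height, width per channel
    std += images.std(dim=(0, 2, 3))
    if i + 1 == n_batches:
        break
print(mean / n_batches, std / n_batches)
```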
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomCrop(256, pad_if_needed=True), transforms.RandomHorizontalFlip(0.1), transforms.RandomRotation(90), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.4, 0.4, 0.4])]) test_transforms = transforms.Compose([transforms.CenterCrop(300), transforms.Resize(256), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output In C:\Users\Beefsports\Miniconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The savefig.frameon rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3. In C:\Users\Beefsports\Miniconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The verbose.level rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3. In C:\Users\Beefsports\Miniconda3\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test.mplstyle: The verbose.fileo rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3. ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
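Before filling in the cell below, it can help to poke at what `ImageFolder` actually builds from that directory layout. A small sketch, assuming the `Cat_Dog_data/train` folder from the download link is available locally:

```python
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

print(len(dataset))            # number of images found under the class folders
print(dataset.classes)         # ['cat', 'dog'], one entry per subdirectory
print(dataset.class_to_idx)    # {'cat': 0, 'dog': 1}, the integer label for each class

image, label = dataset[0]      # indexing returns a (transformed image, label) pair
print(image.shape, label)      # torch.Size([3, 224, 224]) 0
```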
###Code import os print(os.getcwd()) print(os.listdir('../dogs_cats_classification/train/')) data_dir = "../dogs_cats_classification" transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
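One more note on the loader built in the cell above: looping over it walks the whole training set once, batch by batch. A quick sketch for checking shapes, assuming that `dataloader` with `batch_size=32` and 224x224 crops:

```python
for batch_idx, (images, labels) in enumerate(dataloader):
    if batch_idx == 0:
        print(images.shape)    # torch.Size([32, 3, 224, 224])
        print(labels.shape)    # torch.Size([32])
        print(labels[:8])      # 0s and 1s mixed together because shuffle=True
print('batches per epoch:', batch_idx + 1)
```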
###Code data_dir = '../dogs_cats_classification' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) len(trainloader) * 32, len(testloader) * 32 # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/shreyas/.pytorch/Cat_Dog_data/PetImages/' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '/home/shreyas/.pytorch/Cat_Dog_data/PetImages/' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
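It's also worth seeing that random transforms are re-applied every time an item is fetched, which is exactly what makes them useful for augmentation. A small sketch, assuming the `Cat_Dog_data/train` folder is available locally:

```python
import torch
from torchvision import datasets, transforms

augment = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip(),
                              transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=augment)

first, _ = dataset[0]          # same underlying file both times...
second, _ = dataset[0]
print(torch.equal(first, second))   # ...but almost certainly False: different random crops
```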
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32,shuffle=True) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.RandomResizedCrop(size=255, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=2), transforms.RandomGrayscale(p=0.1), transforms.RandomHorizontalFlip(p=0.5), transforms.ToTensor()]) # TODONE: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODONE: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODONE: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODONE: Define transforms for the training data and testing data train_transforms = transform test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(255), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # trainloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) # testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
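The optional cat-vs-dog classifier in the next cell reportedly took many hours to train because everything runs on the CPU. As a minimal sketch (not part of the original exercise), the same loop can be pushed onto a GPU whenever `torch.cuda.is_available()` returns `True`; the `model`, `criterion`, `optimizer`, and `trainloader` names are assumed to be the ones defined in that cell.

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

for images, labels in trainloader:
    # Move each batch to the same device as the model before the forward pass.
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The validation loop needs the same `.to(device)` calls on its batches.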
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(195075, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x class DropoutClassifier(Classifier): def __init__(self): super().__init__() self.dropout = nn.Dropout(p=.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x import itertools, sys spinner = itertools.cycle(['-', '/', '|', '\\']) model = DropoutClassifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 100 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) train_loss = criterion(log_ps, labels) train_loss.backward() optimizer.step() running_loss += train_loss.item() # Implement a spinner to confirm progress sys.stdout.write(next(spinner)) # write the next character sys.stdout.flush() # flush stdout buffer (actual character display) sys.stdout.write('\b') else: ## TODONE: Implement the validation pass and print out the validation accuracy test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print(f"Epoch {e+1} of {epochs} | Train Loss: {train_losses[-1]:.3f} | Test Loss: {test_losses[-1]:.3f} | Accuracy: {(accuracy/len(testloader))*100:.3f}%") model.train() # The above training took too long (12+ hours), but the losses show that it seemed to have performed well. torch.cuda.is_available() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform,) dataloader = torch.utils.data.DataLoader(dataset=dataset,batch_size=256,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(32), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5],[0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(32),transforms.CenterCrop(32),transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=test_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
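Before going further, it helps to see what those random training transforms actually do by pushing the same photo through the pipeline a few times; each pass produces a different crop, rotation, and flip. A small sketch (the folder path and the 224-pixel crop size are assumptions, and `helper.imshow` is the same plotting utility used elsewhere in this notebook):

```python
import matplotlib.pyplot as plt
from torchvision import datasets, transforms
import helper

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Without a transform, ImageFolder hands back PIL images we can re-transform repeatedly.
raw_data = datasets.ImageFolder('Cat_Dog_data/train')
image, label = raw_data[0]

# Four different random views of the same underlying photo.
fig, axes = plt.subplots(figsize=(10, 4), ncols=4)
for ax in axes:
    helper.imshow(train_transforms(image), ax=ax, normalize=False)
```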
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class ClassCD(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1024*3,128) self.fc2 = nn.Linear(128,2) self.dropout = nn.Dropout(0.3) def forward(self,x): x = x.view(x.shape[0],-1) x = self.fc1(x) x = F.relu(x) x = self.dropout(x) x = self.fc2(x) x = F.log_softmax(x,dim=1) return x model = ClassCD() criteron = nn.NLLLoss() optimizer = optim.Adam(model.parameters(),lr=0.001) epochs = 2 for e in range(epochs): running_loss = 0 acc=0 model.train() for images,labels in trainloader: optimizer.zero_grad() ps = model(images) loss= criteron(ps,labels) loss.backward() optimizer.step() running_loss+=loss.item() acc+= (torch.exp(ps).argmax(dim=1)==labels).sum().item() else: print(e,'train loss',running_loss/len(trainloader),'train acc',acc/len(train_data)) running_loss=0 acc=0 model.eval() with torch.no_grad(): for images,labels in testloader: ps = model(images) loss = criteron(ps,labels) running_loss+=loss.item() acc+= (torch.exp(ps).argmax(dim=1)==labels).sum().item() else: print('val loss',running_loss/len(testloader),'val acc',acc/len(test_data)) ###Output 0 train loss 0.19831832142186945 train acc 0.9823111111111111 val loss 5.87448137923132 val acc 0.5 1 train loss 0.808754201428118 train acc 0.5087111111111111 val loss 0.7065436553351486 val acc 0.5 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. 
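Since the class labels come straight from the directory names, it's easy to check what `ImageFolder` found. A quick sketch (the path is an assumption, matching the download above):

```python
from torchvision import datasets

dataset = datasets.ImageFolder('Cat_Dog_data/train')

print(dataset.classes)        # ['cat', 'dog'], taken from the sub-directory names
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1}, the integer labels used for training
print(len(dataset))           # total number of images found

image, label = dataset[0]     # a PIL image and its integer label
print(label, dataset.classes[label])
```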
TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder('/Users/gkriston/Downloads/Cat_Dog_data/train',transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '/Users/gkriston/Downloads/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(254), transforms.RandomRotation(24), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
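Before moving on, the normalization formula shown earlier is easy to verify on a single tensor, and reversing it is handy when you want to plot a normalized image. A small sketch with made-up values:

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

# A fake "image": 3 channels of values in [0, 1], as transforms.ToTensor() would produce.
image = torch.rand(3, 224, 224)
normed = normalize(image.clone())   # clone because some torchvision versions normalize in place

# (x - 0.5) / 0.5 maps [0, 1] onto [-1, 1]
print(normed.min().item(), normed.max().item())

# Undo the normalization to recover the original pixel values.
recovered = normed * 0.5 + 0.5
print(torch.allclose(recovered, image))
```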
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
from torch import nn, optim
import torch.nn.functional as F

class NNet(nn.Module):
    def __init__(self):
        super(NNet, self).__init__()
        # 193548 = 254 * 254 * 3, the number of values in one flattened 254x254 RGB crop
        self.fc1 = nn.Linear(193548, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 64)
        self.fc4 = nn.Linear(64, 2)

        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        x = x.view(x.shape[0], 193548)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        x = F.log_softmax(self.fc4(x), dim=1)
        return x

model = NNet()
constraint = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 30

for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        output = model(images)
        loss = constraint(output, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        accuracy = 0
        test_loss = 0
        model.eval()
        with torch.no_grad():
            for images, labels in testloader:
                output = model(images)
                tloss = constraint(output, labels)
                test_loss += tloss.item()
                top_p, top_c = output.topk(1, dim=1)
                equals = top_c == labels.view(*top_c.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))
        model.train()
        print(f"Epoch: {e+1}/{epochs}, "
              f"Train Loss: {running_loss/len(trainloader):.3f}, "
              f"Test Loss: {test_loss/len(testloader):.3f}, "
              f"Accuracy: {accuracy/len(testloader):.3f}")
 ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smartphone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
 ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. 
TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train/' # Compose transform transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Create ImageFolder dataset = datasets.ImageFolder(data_dir, transform = transform) # Create DataLoader from dataset dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # Transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) print(labels[ii]) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. 
These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transforms = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
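Before attempting that, once the TODOs above are filled in, a quick sanity check on one batch confirms that the shapes are what a network will expect. A sketch assuming 224x224 crops and the batch size of 32 used above:

```python
images, labels = next(iter(trainloader))

print(images.shape)   # torch.Size([32, 3, 224, 224]): batch, channels, height, width
print(labels.shape)   # torch.Size([32])
print(labels[:10])    # integer class indices: 0 for cat, 1 for dog
```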
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = r'D:\Pictures\ML\Cat_Dog_data\train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = r'D:\Pictures\ML\Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
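To get a feel for why fully-connected layers struggle here, it helps to work out how large the flattened input actually is. The snippet below is just an illustration (not part of the exercise solution), assuming the 224x224 center crop used earlier; the first `nn.Linear` layer has to accept one input value per pixel per channel.

```python
import torch
from torch import nn

# a 224x224 RGB image flattens to 3 * 224 * 224 values
flat_size = 3 * 224 * 224
print(flat_size)  # 150528

# even a modest first hidden layer then needs a very large weight matrix
fc1 = nn.Linear(flat_size, 256)
print(sum(p.numel() for p in fc1.parameters()))  # 38,535,424 parameters in this one layer
```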
###Code device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim class Classifier(nn.Module): def __init__(self): super().__init__() self.model = nn.Sequential( nn.Linear(224*224*3, 784), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 10), nn.LogSoftmax(dim=1)) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.model(x) return x model = Classifier() model data_iter = iter(testloader) images, labels = next(data_iter) print(images[0].shape) flattened_img = images.view(images.shape[0], -1) print(flattened_img.shape) # batch of 32 images, each image flattened from 3 channels, 224x224 pixels print(3*224*224) criterion = nn.NLLLoss().to(device) optimizer = optim.Adam(model.model.parameters(), lr=0.003) model.to(device) epochs = 3 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: images, labels = images.to(device), labels.to(device) optimizer.zero_grad() log_ps = model(images) # ?! Training loss doesn't match loss from solution. Could it be the value # from the previous cpu training is being referenced, while the loss local to this cell # is stored in GPU memory, and not being referenced? loss = criterion(log_ps, labels) running_loss += loss.item() loss.backward() optimizer.step() else: total_test_loss = 0 total_test_correct = 0 with torch.no_grad(): for images, labels in testloader: images, labels = images.to(device), labels.to(device) log_ps = model(images) loss = criterion(log_ps, labels) total_test_loss += loss.item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) bool_correct = top_class == labels.view(*top_class.shape) total_test_correct += bool_correct.sum().item()# torch.mean(bool_correct.type(torch.FloatTensor)) model.train() train_loss = running_loss / len(trainloader.dataset) test_loss = total_test_loss / len(testloader.dataset) train_losses.append(train_loss) test_losses.append(test_loss) print( "Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_loss), "Test Loss: {:.3f}.. ".format(test_loss), "Test Accuracy: {:.3f}".format(total_test_correct / len(testloader.dataset))) ###Output Epoch: 1/3.. Training Loss: 0.682.. Test Loss: 2830.177.. Test Accuracy: 0.500 Epoch: 2/3.. Training Loss: 4.606.. Test Loss: 709.480.. Test Accuracy: 0.500 Epoch: 3/3.. Training Loss: 1.293.. Test Loss: 1003.663.. Test Accuracy: 0.500 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
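Once your loader is built, a quick sanity check is to pull one batch and look at the tensor shapes. This is an illustrative snippet only, assuming the `batch_size=32` and 224-pixel crop shown above:

```python
images, labels = next(iter(dataloader))
print(images.shape)   # torch.Size([32, 3, 224, 224]) -> batch, channels, height, width
print(labels.shape)   # torch.Size([32]) -> one integer class label per image
```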
###Code !pwd data_dir = '/../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '~/Shares/local/philip/ML_Data/Udacity/DeepLearning/Cat_Dog_data/Cat_Dog_data' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir+'/train', transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(labels[0]) ###Output tensor(1) ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
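As a concrete check of the normalization formula above: `transforms.ToTensor()` scales pixel values into the range [0, 1], so with a mean and standard deviation of 0.5 per channel the normalized values land in [-1, 1]. A tiny worked example (illustration only):

```python
import torch

# pixel value 0.0 -> (0.0 - 0.5) / 0.5 = -1.0
# pixel value 0.5 -> (0.5 - 0.5) / 0.5 =  0.0
# pixel value 1.0 -> (1.0 - 0.5) / 0.5 =  1.0
x = torch.tensor([0.0, 0.5, 1.0])
print((x - 0.5) / 0.5)  # tensor([-1.,  0.,  1.])
```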
###Code # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, title= 'Cat' if labels[ii].numpy==0 else 'Dog', normalize=False,) images.shape, labels.shape ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3*224**2, 256) self.fc2 = nn.Linear(256, 64) self.fc3 = nn.Linear(64, 8) self.fc4 = nn.Linear(8, 2) self.dropout = nn.Dropout(p=0.2) def forward(self,x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.fc4(x) return x ## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy model = Network() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.03) epochs = 5 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() loss = criterion(model(images), labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: test_loss += criterion(model(images), labels) ps = model(images) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. 
".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output Epoch: 1/5.. Training Loss: 517.247.. Test Loss: 14422.202.. Test Accuracy: 0.506 Epoch: 2/5.. Training Loss: 108.583.. Test Loss: 7084.508.. Test Accuracy: 0.506 Epoch: 3/5.. Training Loss: 5.640.. Test Loss: 2.259.. Test Accuracy: 0.506 Epoch: 4/5.. Training Loss: 0.275.. Test Loss: 2.221.. Test Accuracy: 0.506 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
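If you do apply `transforms.Normalize`, remember that the stored tensors are no longer in the [0, 1] range, so plotting them directly will look washed out. A small sketch of undoing the normalization for display, assuming the 0.5 mean and standard deviation used above and a batch of `images` from your loader:

```python
import torch
import matplotlib.pyplot as plt

img = images[0]                                    # one normalized image, shape [3, 224, 224]
mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
std = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
restored = img * std + mean                        # undo (x - mean) / std
plt.imshow(restored.permute(1, 2, 0).numpy())      # reorder to height x width x channels for matplotlib
plt.show()
```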
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'C:/Users/drewc/python/Cat_Dog_data/train' transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'C:/Users/drewc/python/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(244), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
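One practical detail if you do attempt a fully-connected model: the input size of the first `nn.Linear` layer has to match the crop size you chose in your transforms. A quick way to check it, sketched here under the assumption that you already have a working `trainloader`, is to flatten one batch and read off the second dimension:

```python
images, labels = next(iter(trainloader))
flattened = images.view(images.shape[0], -1)
print(flattened.shape[1])   # 3 * 224 * 224 = 150528 for 224-pixel crops,
                            # 3 * 244 * 244 = 178608 for 244-pixel crops
```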
###Code from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(178608, 100) self.fc2 = nn.Linear(100, 50) self.fc3 = nn.Linear(50, 2) def forward(self, x): x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.log_softmax(self.fc3(x), dim=1) return x device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 1 steps = 0 print(len(trainloader)) model.to(device) import time train_losses, test_losses = [], [] for e in range(epochs): print('started new epoch') running_loss = 0 for images, labels in trainloader: images, labels = images.to(device), labels.to(device) start = time.time() optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() print(time.time()-start) else: ## TODO: Implement the validation pass and print out the validation accuracy with torch.no_grad(): for images, labels in testloader: log_ps = model(images) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1,dim=1) equals = top_class == labels.view(*top_class.shape) accuracy = torch.mean(equals.type(torch.FloatTensor)) print(f'Accuracy: {accuracy.item()*100}%') print(model.state_dict().keys()) images.cuda() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. 
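If you ever need to see how `ImageFolder` mapped the directory names to integer labels, the dataset object exposes this directly. The snippet below is illustrative and assumes a `transform` pipeline like the one shown above:

```python
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
print(dataset.classes)        # ['cat', 'dog'] - class names taken from the directory names
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1} - directory name mapped to integer label
```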
TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' #transform = # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) transform = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) #dataset = # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) #dataloader = # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) print(labels[0]) helper.imshow(images[0], normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=128, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=64, shuffle=True) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class Classifier(torch.nn.Module): def __init__(self): super().__init__() #self.fc1 = nn.Linear(784, 256) #self.fc2 = nn.Linear(256, 128) #self.fc3 = nn.Linear(128, 64) #self.fc4 = nn.Linear(64, 10) self.layer1 = torch.nn.Sequential( torch.nn.Conv2d(in_channels=3, out_channels=32, kernel_size=4, stride=1, padding=2), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2, stride=2)) self.layer2 = torch.nn.Sequential( torch.nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2, stride=2)) self.drop_out = torch.nn.Dropout() #self.fc1 = torch.nn.Linear(7 * 7 * 64, 1000) self.fc1 = torch.nn.Linear(200704, 1000) self.fc2 = torch.nn.Linear(1000, 2) self.output = torch.nn.LogSoftmax(dim=1) def forward(self, x): # make sure input tensor is flattened #x = x.view(x.shape[0], -1) #x = F.relu(self.fc1(x)) #x = F.relu(self.fc2(x)) #x = F.relu(self.fc3(x)) #x = F.log_softmax(self.fc4(x), dim=1) #return x out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) #print("out.shape: {}".format(out.shape)) #torch.Size([32, 200704]) out = self.drop_out(out) out = self.fc1(out) out = self.fc2(out) out = self.output(out) return out model = Classifier() criterion = torch.nn.NLLLoss() #criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.003) device = torch.device("cuda", 0) if torch.cuda.is_available() else torch.device("cpu") print("using device: {}".format(device)) model.to(device) epochs = 3 for e in range(epochs): running_loss = 0 for images, labels in trainloader: images = images.to(device) labels = labels.to(device) optimizer.zero_grad() logits = model(images) loss = criterion(logits, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Epoch: {e}, Training loss: {running_loss/len(trainloader)}") # Import helper module (should be in the repo) import helper # Test out your network! model.eval() with torch.no_grad(): for images, labels in testloader: img = images.to(device)[0] label = labels.to(device)[0] img = img.view(1, 3, 224, 224) #print(img.shape) logps = model.forward(img) output = torch.exp(logps) # Plot the image and probabilities #helper.view_classify(img.cpu(), ps.cpu(), version='Fashion') display_img = img.cpu().view(3,224,224).numpy().transpose(1,2,0) #imgplot = plt.imshow(img.view(3,224,224).cpu()) print("output: {}".format(output)) print("label: {}".format(label)) imgplot = plt.imshow(display_img) break model.train() ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. 
These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
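The cells below use the course's `helper.imshow` utility. If you don't have that module handy, a rough equivalent (an illustration only, assuming the `dataset` and `dataloader` you build in your solution) is to reorder the tensor from channels-first to channels-last and hand it to matplotlib:

```python
import matplotlib.pyplot as plt

images, labels = next(iter(dataloader))
plt.imshow(images[0].permute(1, 2, 0).numpy())     # [3, 224, 224] -> [224, 224, 3]
plt.title(dataset.classes[labels[0].item()])       # show the class name for this image
plt.show()
```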
###Code data_dir = '/Users/fabsta/.kaggle/competitions/dogs-vs-cats/Cat_Dog_data/train' transform = transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
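If you'd rather normalize with statistics estimated from the data instead of the 0.5 values shown above, here is a rough sketch. It assumes the `dataloader` built earlier (images passed through `ToTensor` only, no `Normalize` yet), and it only averages per-batch statistics, so treat the numbers as approximate:

```python
import torch

n_batches = 20
mean = torch.zeros(3)
std = torch.zeros(3)
for i, (images, _) in enumerate(dataloader):
    if i == n_batches:
        break
    flat = images.view(images.size(0), 3, -1)   # [batch, channel, pixels]
    mean += flat.mean(2).mean(0)                # per-channel mean of this batch
    std += flat.std(2).mean(0)                  # per-channel std (averaged per image)
mean /= n_batches
std /= n_batches
print(mean, std)   # plug these into transforms.Normalize(mean, std) if you like
```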
###Code data_dir = '/Users/fabsta/.kaggle/competitions/dogs-vs-cats/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
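Once the loader in the next cell is built, a quick shape check confirms what each batch actually contains. A small sketch (it assumes the 224x224 center crop and batch size of 32 used here):

```python
# Each item from the DataLoader is a tuple of (images, labels)
images, labels = next(iter(dataloader))
print(images.shape)   # torch.Size([32, 3, 224, 224]) -> batch, channels, height, width
print(labels.shape)   # torch.Size([32])
print(labels[:10])    # tensor of 0s and 1s, the cat/dog class indices
```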
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(size=255), transforms.CenterCrop(254), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
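If you do add `transforms.Normalize` (as some of the solutions in this notebook do), plotting the raw tensors will trigger matplotlib's "Clipping input data" warning and the images will look washed out. A small sketch for undoing a 0.5 mean / 0.5 std normalization before display; the `unnormalize` helper is just illustrative and assumes a CPU tensor `images[0]` of shape `[3, H, W]`:

```python
import torch
import matplotlib.pyplot as plt

def unnormalize(img, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)):
    # Invert (x - mean) / std channel-wise, then clamp into [0, 1] for imshow
    mean = torch.tensor(mean).view(3, 1, 1)
    std = torch.tensor(std).view(3, 1, 1)
    return (img * std + mean).clamp(0, 1)

img = unnormalize(images[0])
plt.imshow(img.permute(1, 2, 0).numpy())   # matplotlib expects [H, W, C]
```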
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(254), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset batch_size, input_size = torch.flatten(images, start_dim=1).shape hidden_sizes = [1048, 960, 640, 320, 160, 80, 76, 25, 8] output_size = 2 print(batch_size, input_size) import torch.nn as nn import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(input_size, hidden_sizes[0]) self.fc2 = nn.Linear(hidden_sizes[0], hidden_sizes[1]) self.fc3 = nn.Linear(hidden_sizes[1], hidden_sizes[2]) self.fc4 = nn.Linear(hidden_sizes[2], hidden_sizes[3]) self.fc5 = nn.Linear(hidden_sizes[3], hidden_sizes[4]) self.fc6 = nn.Linear(hidden_sizes[4], hidden_sizes[5]) self.fc7 = nn.Linear(hidden_sizes[5], hidden_sizes[6]) self.fc8 = nn.Linear(hidden_sizes[6], hidden_sizes[7]) self.fc9 = nn.Linear(hidden_sizes[7], hidden_sizes[8]) self.fc10 = nn.Linear(hidden_sizes[8], output_size) def forward(self, x): x = torch.flatten(x, start_dim=1) x = F.dropout(F.relu(self.fc1(x)), 0.05) x = F.dropout(F.relu(self.fc2(x)), 0.1) x = F.dropout(F.relu(self.fc3(x)), 0.1) x = F.dropout(F.relu(self.fc4(x)), 0.2) x = F.dropout(F.relu(self.fc5(x)), 0.25) x = F.dropout(F.relu(self.fc6(x)), 0.2) x = F.dropout(F.relu(self.fc7(x)), 0.1) x = F.dropout(F.relu(self.fc8(x)), 0.1) x = F.dropout(F.relu(self.fc9(x)), 0.05) x = F.log_softmax(self.fc10(x), dim=1) return x import torch.optim as optim model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.01) epochs = 30 for e in range(epochs): for images, labels in trainloader: model.train() optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() else: with torch.no_grad(): for images, labels in testloader: model.eval() 
log_ps = model(images) ps = torch.exp(log_ps) _, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy = torch.mean(equals.type(torch.FloatTensor)) print(f"Accuracy: {accuracy.item() * 100}%") ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code import os base_path = '/Users/elkhand/datasets' data_dir = os.path.join(base_path,'Cat_Dog_data/train') transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
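Beyond rotation, random resized crops, and horizontal flips, `torchvision.transforms` has a few other augmentations that can be dropped into the same `Compose` pipeline if you want to experiment. A sketch, not a recommendation tuned for this dataset:

```python
from torchvision import transforms

extra_augment = transforms.Compose([transforms.RandomRotation(30),
                                    transforms.RandomResizedCrop(224),
                                    transforms.RandomHorizontalFlip(),
                                    transforms.ColorJitter(brightness=0.2, contrast=0.2),
                                    transforms.RandomGrayscale(p=0.05),
                                    transforms.ToTensor()])
```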
###Code import os base_path = '/Users/elkhand/datasets' data_dir = os.path.join(base_path,'Cat_Dog_data') # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
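To see why a fully-connected network struggles here, it helps to count parameters: a 224x224 RGB image flattens to 150,528 values, so even one modest hidden layer already means tens of millions of weights. A quick check:

```python
# Parameter count for just the first fully-connected layer on a flattened 224x224 RGB input
in_features = 3 * 224 * 224          # 150528 inputs per image
hidden = 256
params = in_features * hidden + hidden
print(params)                        # 38535424 -> about 38.5 million weights and biases in one layer
```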
###Code import torch from torch import nn, optim import torch.nn.functional as F # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class CatsAndDogsNetwork(nn.Module): def __init__(self, input_size, output_size, hidden_layers, drop_p=0.5): super().__init__() # Input to a hidden layer self.hidden_layers = nn.ModuleList([nn.Linear(input_size, hidden_layers[0])]) layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:]) self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes]) self.output = nn.Linear(hidden_layers[-1], output_size) self.dropout = nn.Dropout(p=drop_p) def forward(self, x): #x = x.view(x.shape[0], 784) #print("x.shape: ", x.shape) for h in self.hidden_layers: x = self.dropout(F.relu(h(x))) x = self.output(x) return F.log_softmax(x, dim=1) def validation(model, testloader, criterion): test_loss, accuracy = 0, 0 for images, labels in testloader: images = images.resize_(images.size()[0], 784) log_ps = model(images) test_loss += criterion(log_ps, labels).item() ps = torch.exp(log_ps) top_p, top_classes = ps.topk(1, dim=1) equals = top_classes == labels.view(*top_classes.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) return test_loss, accuracy def train(model, trainloader, testloader, criterion, optimizer, epochs=5, print_every=40): for e in range(epochs): running_loss = 0 batch_count = 0 for images, labels in trainloader: batch_count += 1 optimizer.zero_grad() # Flatten images into a 784 long vector images.resize_(images.size()[0], 784) log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() if batch_count == print_every: model.eval() with torch.no_grad(): test_loss, accuracy = validation(model, testloader, criterion) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/print_every), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) model.train() model = CatsAndDogsNetwork(784, 10, [256,128,64]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) print(model) train(model, trainloader, testloader, criterion, optimizer) ###Output CatsAndDogsNetwork( (hidden_layers): ModuleList( (0): Linear(in_features=784, out_features=256, bias=True) (1): Linear(in_features=256, out_features=128, bias=True) (2): Linear(in_features=128, out_features=64, bias=True) ) (output): Linear(in_features=64, out_features=10, bias=True) (dropout): Dropout(p=0.5) ) Epoch: 1/5.. Training Loss: 0.189.. Test Loss: 137.806.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 64.157.. Test Loss: 67.706.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 2.671.. Test Loss: 0.701.. Test Accuracy: 0.506 Epoch: 4/5.. Training Loss: 76.381.. Test Loss: 3.271.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 10.209.. Test Loss: 16.040.. Test Accuracy: 0.494 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. 
These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
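One caveat about the snippets above: assigning the composed pipeline to the name `transforms` shadows the imported `transforms` module, so later calls such as `transforms.Normalize` would fail. A safer pattern (the one used elsewhere in this notebook) is to give the pipeline a different name:

```python
from torchvision import datasets, transforms

# Keep the module name `transforms` free by calling the pipeline `transform`
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
```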
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
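A small detail about the loaders built in the next cell: the training loader is created without `shuffle=True`, which is fine for eyeballing the transforms, but when you actually train you'll usually want each epoch to see the images in a different order. For example:

```python
# Shuffle the training data each epoch; the test loader can stay in a fixed order
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
```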
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
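If you do try a convolutional network (one attempt follows below), the fiddly part is tracking the spatial size after each conv/pool layer so the first linear layer gets the right `in_features`. The standard output-size formula, checked here against the kernel, stride, and padding values used in that attempt:

```python
def out_size(size, kernel, stride, padding=0):
    # Output spatial size of a Conv2d or MaxPool2d layer (dilation = 1)
    return (size + 2 * padding - kernel) // stride + 1

s = out_size(224, kernel=10, stride=4, padding=9)   # conv1
s = out_size(s, kernel=4, stride=4)                 # pool1
s = out_size(s, kernel=5, stride=1)                 # conv2
s = out_size(s, kernel=2, stride=2)                 # pool2
print(s, 16 * s * s)   # 5 400 -> matches the 16 * 5 * 5 input of the first linear layer
```

One other thing worth noting about that attempt: the network is created as `net = Classifier()`, but the optimizer is built from `model.parameters()` rather than `net.parameters()`, so as written it would not actually update the new network's weights.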
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim from collections import OrderedDict class Flatten(nn.Module): def forward(self, x): return x.view(x.size()[0], -1) class Classifier(nn.Module): """ [3@224x224] Input [6@56x56] CONV1 (10x10), stride 4, pad 9 [6@14x14] POOL1 (4x4) stride 4 [16@10x10] CONV2 (5x5), stride 1, pad 0 [16@5x5] POOL2 (2x2) stride 2 [120] FC [84] FC [2] Softmax """ def __init__(self): super().__init__() self.model = nn.Sequential(OrderedDict([ ('conv1', nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(10, 10), stride=4, padding=9)), ('relu1', nn.ReLU()), ('pool1', nn.MaxPool2d(kernel_size=(4, 4), stride=4)), ('conv2', nn.Conv2d(6, 16, (5, 5), 1, 0)), ('relu2', nn.ReLU()), ('pool2', nn.MaxPool2d((2, 2), 2)), ('flatten', Flatten()), ('fc3', nn.Linear(in_features=16 * 5 * 5, out_features=120)), ('relu3', nn.ReLU()), ('dropout3', nn.Dropout(p=0.2)), ('fc4', nn.Linear(in_features=120, out_features=84)), ('relu4', nn.ReLU()), ('dropout4', nn.Dropout(p=0.2)), ('fc5', nn.Linear(84, 2)), ('prob', nn.LogSoftmax(dim=1))])) def forward(self, x): # make sure input tensor is flattened x = self.model(x) return x net = Classifier() torch.cuda.set_device(0) net.cuda() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: images, labels = images.cuda(), labels.cuda() optimizer.zero_grad() log_ps = net.forward(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss, accuracy = 0, 0 ## TODO: Implement the validation pass and print out the validation accuracy with torch.no_grad(): # Validation pass for images, labels in testloader: images, labels = images.cuda(), labels.cuda() log_ps = net.forward(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print('Epoch: {}/{}'.format(e+1, epochs)) print('Training loss: {:.3f}..'.format(running_loss/len(trainloader))) print('Test loss: {:.3f}..'.format(test_loss/len(testloader))) print('Test Accuracy: {:.3f}..'.format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from torch.utils.data import DataLoader, Dataset import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
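As an aside on the `DataLoader` parameters mentioned above: besides `batch_size` and `shuffle`, two options that often matter when loading full-sized image folders are worker processes and pinned memory. A sketch; the right values depend on your machine:

```python
dataloader = torch.utils.data.DataLoader(dataset,
                                         batch_size=32,
                                         shuffle=True,
                                         num_workers=4,    # decode images in parallel worker processes
                                         pin_memory=True)  # speeds up host-to-GPU copies when training on CUDA
```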
###Code data_dir = 'Cat_dog_data/train' transform = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5],[0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = DataLoader(train_data, batch_size=32) testloader = DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
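If you want a starting point for the blank template in the next cell, here is one minimal way to fill it in. It's a sketch that mirrors the completed copies earlier in this notebook, keeping normalization off as the exercise asks:

```python
# Training data: add some randomness so the network sees varied crops and orientations
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Test data: deterministic resize and center crop only
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])
```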
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(35), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
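Once you've built the dataset and loader in the next cell, a couple of quick length checks make a useful sanity test; a small sketch, assuming you keep the names `dataset` and `dataloader` and a batch size of 32:

```python
print(len(dataset))     # number of images found under the class folders
print(len(dataloader))  # number of batches per epoch, roughly len(dataset) / 32
```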
###Code data_dir = 'Cat_Dog_data/train' # Define transforms for preprocessing transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Load and preprocess each image in set, pair each image with a label dataset = datasets.ImageFolder(data_dir, transform=transform) # Shuffle dataset. Create a generator that returns 32 pairs of (preprocessed_image, label) from the shuffled dataset. dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
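Before filling in the transforms, it can help to see the normalization arithmetic on a concrete value; here's a tiny illustrative sketch (the pixel values are made up):

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
pixel = torch.tensor([[[0.0]], [[0.5]], [[1.0]]])  # a 3x1x1 "image" with values in [0, 1]
print(normalize(pixel).flatten())                  # tensor([-1., 0., 1.]): (x - 0.5) / 0.5
```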
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(45), transforms.RandomResizedCrop(112), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(45), transforms.RandomResizedCrop(112), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # Show training examples data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) # Show test examples data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code # !git clone <url> # %cd deep-learning-v2-pytorch/intro-to-pytorch %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'dogs-vs-cats/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
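One way to get a feel for what the augmentation transforms are doing is to apply the training pipeline to the same picture several times and watch the output change; a minimal sketch (the file path is just a placeholder for any image in the training set):

```python
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip(),
                              transforms.ToTensor()])

img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')  # placeholder path
for _ in range(3):
    out = augment(img)   # a different random rotation/crop/flip each call
    print(out.shape)     # always torch.Size([3, 224, 224])
```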
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple of example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`.
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '~/Programming Projects/deep-learning-v2-pytorch/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '~/Programming Projects/deep-learning-v2-pytorch/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() flattened_size = (224**2)*3 # [224, 224] image with 3 color channels? 
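# flattened_size is 224 * 224 * 3 = 150,528 input features per image, so the first Linear layer below (150,528 -> 75,264) alone holds roughly 11 billion weights; this is largely why a plain fully-connected network is impractical at this resolution.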
self.fc1 = nn.Linear(flattened_size, int(flattened_size/2)) self.fc2 = nn.Linear(int(flattened_size/2), int(flattened_size/8)) self.fc3 = nn.Linear(int(flattened_size/8), int(flattened_size/16)) self.fc4 = nn.Linear(int(flattened_size/16), int(flattened_size/64)) self.fc5 = nn.Linear(int(flattened_size/64), int(flattened_size/(64*4))) # Output size of 2 because there are two possiblities (cat or dog) self.fc6 = nn.Linear(int(flattened_size/(64*4)), 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = self.dropout(F.relu(self.fc5(x))) x = F.log_softmax(self.fc6(x)) return x model = Classifier() torch.cuda.is_available() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.CenterCrop(255), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
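Before filling in the transforms, a quick way to convince yourself what resize-then-crop produces is to shape-check it on a dummy image; a small sketch (the 640x480 size is only illustrative):

```python
from PIL import Image
from torchvision import transforms

img = Image.new('RGB', (640, 480))               # blank 640x480 image, just for checking sizes
resized = transforms.Resize(255)(img)            # scales the shorter side to 255, keeping aspect ratio
print(resized.size)                              # (340, 255)
print(transforms.CenterCrop(224)(resized).size)  # (224, 224)
```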
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
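To put a rough number on why a fully-connected network struggles here, consider just the first weight matrix; a quick back-of-the-envelope sketch (the hidden size is arbitrary):

```python
# Each 224x224 RGB image flattens to 224 * 224 * 3 input features.
inputs = 224 * 224 * 3
hidden = 1024                  # even a modest first hidden layer
print(inputs)                  # 150528
print(inputs * hidden)         # 154140672 weights in the first layer alone
```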
###Code img = images[1].clone() img.shape img.view(3 * 224 * 224) img.view(img.shape[0], -1).shape from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3 * 224 * 224, 224 * 224) self.fc2 = nn.Linear(224 * 224, 224) self.fc3 = nn.Linear(224, 112) self.fc4 = nn.Linear(112, 56) self.fc5 = nn.Linear(56, 28) self.fc6 = nn.Linear(28, 16) self.fc7 = nn.Linear(16, 8) self.fc8 = nn.Linear(8, 2) def forward(self, x): # make sure input tensor is flattened, keeping the batch dimension x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.relu(self.fc4(x)) x = F.relu(self.fc5(x)) x = F.relu(self.fc6(x)) x = F.relu(self.fc7(x)) x = F.log_softmax(self.fc8(x), dim=1) return x model = Classifier() images, labels = next(iter(testloader)) # Get the class probabilities ps = torch.exp(model(images)) # Make sure the shape is appropriate, we should get 2 class probabilities for each of the 32 examples in the batch print(ps.shape) # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset # TODO: write a training loop, e.g. for images, labels in trainloader: ... ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple of example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc.
We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.RandomCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
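Before moving on to data loaders, it can be worth sanity-checking a transform pipeline on a single image. The sketch below is optional and makes a couple of assumptions: the file path is hypothetical (any one image from the training folder will do), and the sizes simply mirror the example pipeline above.

```python
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

# Hypothetical path; point this at any single image in the training set
img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')

tensor_img = transform(img)
print(tensor_img.shape)                    # expect torch.Size([3, 224, 224])
print(tensor_img.min(), tensor_img.max())  # ToTensor() scales pixel values into [0, 1]
```

If the shape or value range isn't what you expect here, it's much easier to debug than after the pipeline is buried inside an `ImageFolder` and `DataLoader`.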
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # transform = # TODO: compose transforms here # dataset = # TODO: create the ImageFolder # dataloader = # TODO: use the ImageFolder dataset to create the DataLoader transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader # data_iter = iter(testloader) data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
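If you'd like a preview of the pre-trained approach, here is a minimal sketch using `torchvision.models`. It is not the exact model used in the next part; the choice of `resnet18`, the frozen feature layers, and the learning rate are all just illustrative assumptions.

```python
import torch
from torch import nn, optim
from torchvision import models

# Load a network pre-trained on ImageNet and freeze its feature layers
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Swap the final fully-connected layer for a new 2-class head (cat vs dog)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
# Only the new head's parameters get updated
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
```

Keep in mind that pre-trained torchvision models expect inputs normalized with the ImageNet channel means and standard deviations, so the `Normalize` step in the transforms matters once you actually train something like this.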
###Code
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
# Take work from last solution, even though we don't expect very good results

from torch import nn, optim
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(150528, 256)   # 150528 = 3 * 224 * 224 flattened pixels
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 2)         # two classes: cat and dog

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened, keeping the batch dimension
        x = x.view(x.shape[0], -1)
        # print("x shape:", x.shape)  # debug: uncomment to check the flattened shape

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)

        return x

model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 4
steps = 0

train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()

        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        test_loss = 0
        accuracy = 0

        # Turn off gradients for validation, saves memory and computations
        with torch.no_grad():
            model.eval()
            for images, labels in testloader:
                log_ps = model(images)
                test_loss += criterion(log_ps, labels)

                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))

        model.train()

        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))

        print("Epoch: {}/{}.. ".format(e+1, epochs),
              "Training Loss: {:.3f}.. ".format(train_losses[-1]),
              "Test Loss: {:.3f}.. ".format(test_losses[-1]),
              "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)

# Import helper module (should be in the repo)
import helper

# Test out your network!

model.eval()

dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]

# Flatten the 3x224x224 image into a single row vector (3 * 224 * 224 = 150528 values)
img = img.view(1, 150528)

# Calculate the class probabilities (softmax) for img
with torch.no_grad():
    output = model.forward(img)

ps = torch.exp(output)

# Show the image and print the class probabilities
# (helper.view_classify expects 28x28 MNIST/Fashion-MNIST images, so it isn't used here)
helper.imshow(images[0], normalize=False)
print(ps)
###Output
_____no_output_____
###Markdown
Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, 32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader for loader in [trainloader, testloader]: data_iter = iter(loader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). 
I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform )# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code train_dir = '../data/cat_dog/train/' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(train_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True, num_workers=8) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) data = images.view(64,3,-1) data.mean(2).sum(0) # find appropriate normalization values mean=0 std=0 num=0 for images,_ in dataloader: bs,c,h,w = images.size() images = images.view(bs, c, -1) # collapse the h/w into pixel vector mean += images.mean(2).sum(0) # get mean of pixel vector for each batch and channel; sum the batches std += images.std(2).sum(0) num += 1 num*64 mean/(num*64), std/(num*64) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
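Before filling in the exercise, it can help to convince yourself that `transforms.Normalize` really applies the per-channel formula described above. A minimal sketch, using made-up statistics in place of whatever values the loop above produced for your data:

```python
import torch
from torchvision import transforms

# Stand-ins for the per-channel statistics computed above
means = torch.tensor([0.49, 0.45, 0.41])
stds = torch.tensor([0.22, 0.22, 0.22])

img = torch.rand(3, 224, 224)  # stand-in for a ToTensor()'d image

normalize = transforms.Normalize(means.tolist(), stds.tolist())
normalized = normalize(img.clone())  # clone in case Normalize works in place

# Manual version of: input[channel] = (input[channel] - mean[channel]) / std[channel]
manual = (img - means[:, None, None]) / stds[:, None, None]
print(torch.allclose(normalized, manual))  # True
```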
###Code
data_dir = '../data/cat_dog'

means = [0.4877, 0.4502, 0.4112]
stds = [0.2226, 0.2180, 0.2171]

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomResizedCrop(224),
                                       transforms.RandomRotation(30),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize(means, stds)
                                      ])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize(means, stds)
                                     ])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(trainloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Your transformed images should look something like this.

Training examples:

Testing examples:

At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images, which are tiny).

In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
###Code
from torch import nn, optim
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        # conv -> batch norm -> ReLU -> 2x2 max pool
        return F.max_pool2d(F.relu(self.bn(self.conv(x))), 2)

# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
# One possible way to finish this draft: reuse ConvBlock for each stage.
# The channel counts and layer sizes below are only illustrative.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = ConvBlock(3, 10, 5)    # 224x224 -> 110x110 after pooling
        self.conv2 = ConvBlock(10, 20, 5)   # 110x110 -> 53x53 after pooling
        self.fc = nn.Linear(20 * 53 * 53, 2)

    def forward(self, x):
        x = self.conv2(self.conv1(x))
        x = x.view(x.shape[0], -1)
        return F.log_softmax(self.fc(x), dim=1)
###Output
_____no_output_____
###Markdown
Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ?transforms ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code # data_dir = 'Cat_Dog_data/train' data_dir = '/projects/trans_scratch/validations/workspace/szong/deep_learning/fastai/courses/dl1/data/dogscats/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
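As an aside, the 0.5 means and standard deviations are just a convenient default. If you'd rather normalize with statistics estimated from the training images themselves, one rough way to compute them is sketched below; it assumes the `Cat_Dog_data/train` folder and a reasonably recent PyTorch, and the exact numbers should be treated as approximate.

```python
import torch
from torchvision import datasets, transforms

stat_transform = transforms.Compose([transforms.Resize(255),
                                     transforms.CenterCrop(224),
                                     transforms.ToTensor()])
stat_data = datasets.ImageFolder('Cat_Dog_data/train', transform=stat_transform)
stat_loader = torch.utils.data.DataLoader(stat_data, batch_size=64)

channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0

for images, _ in stat_loader:
    # images: (batch, 3, 224, 224); accumulate per-channel sums over every pixel
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    n_pixels += images.shape[0] * images.shape[2] * images.shape[3]

means = channel_sum / n_pixels
stds = (channel_sq_sum / n_pixels - means ** 2).sqrt()
print(means, stds)
```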
###Code # data_dir = 'Cat_Dog_data' data_dir = '/projects/trans_scratch/validations/workspace/szong/deep_learning/fastai/courses/dl1/data/dogscats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(244), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([.5,.5,.5], [.5,.5,.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor(), transforms.Normalize([.5,.5,.5], [.5,.5,.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/valid', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) image, label = next(iter(testloader)) label # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
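Once you have a loader, it's worth checking what one batch actually looks like. A small sketch, assuming the `dataloader` from the snippet above; the exact sizes depend on your transforms and batch size:

```python
images, labels = next(iter(dataloader))

print(images.shape)   # e.g. torch.Size([32, 3, 224, 224]) with a 224x224 crop and batch_size=32
print(labels.shape)   # torch.Size([32]) - one class index per image
print(labels[:10])    # e.g. tensor([0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
```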
###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ] ) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose( [ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ] ) test_transforms = transforms.Compose( [ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ] ) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code images[ii].shape from torch import nn import torch.nn.functional as F from torch import optim # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3*224*224, 256) self.fc2 = nn.Linear(256, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) # output so no dropout here x = F.log_softmax(self.fc2(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 model.train () for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: model.eval() test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output Epoch: 1/30.. Training Loss: 5.799.. Test Loss: 24915.811.. Test Accuracy: 0.506 Epoch: 2/30.. Training Loss: 230.360.. Test Loss: 51103.766.. Test Accuracy: 0.506 Epoch: 3/30.. Training Loss: 214.579.. Test Loss: 38657.320.. Test Accuracy: 0.506 Epoch: 4/30.. Training Loss: 663.853.. Test Loss: 41670.688.. Test Accuracy: 0.506 Epoch: 5/30.. Training Loss: 1728.869.. Test Loss: 38854.535.. Test Accuracy: 0.506 Epoch: 6/30.. Training Loss: 1668.271.. Test Loss: 22242.861.. Test Accuracy: 0.506 Epoch: 7/30.. Training Loss: 865.079.. Test Loss: 16720.221.. Test Accuracy: 0.506 Epoch: 8/30.. Training Loss: 378.057.. Test Loss: 15846.117.. Test Accuracy: 0.506 Epoch: 9/30.. Training Loss: 53.649.. Test Loss: 9775.612.. Test Accuracy: 0.506 Epoch: 10/30.. Training Loss: 46.498.. Test Loss: 7916.501.. Test Accuracy: 0.506 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
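Another quick sanity check is to count how many images `ImageFolder` found per class, which helps you spot an unbalanced dataset. A minimal sketch, again assuming the `Cat_Dog_data/train` layout:

```python
from collections import Counter
from torchvision import datasets

dataset = datasets.ImageFolder('Cat_Dog_data/train')

# dataset.imgs is a list of (file_path, class_index) pairs
counts = Counter(label for _, label in dataset.imgs)
print({dataset.classes[idx]: n for idx, n in counts.items()})
```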
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from torch import nn import torch.nn.functional as F import fc_model import helper from torch import optim ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transforms1 =transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(root=data_dir,transform=transforms1)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset=dataset,shuffle=True,batch_size=3)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5],[0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32,shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32,shuffle=True) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) print(images.shape) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
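If you do want to experiment before moving on, a small convolutional network is a more realistic baseline than a fully-connected one for 224x224 colour images. The sketch below is only one possible layout (hyperparameters are not tuned; it assumes normalized 3x224x224 inputs and the `trainloader` defined above), and even then you should expect slow training compared with the pre-trained approach in the next part.

```python
from torch import nn, optim
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 28 * 28, 256)
        self.fc2 = nn.Linear(256, 2)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        # Each conv + pool halves the spatial size: 224 -> 112 -> 56 -> 28
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.max_pool2d(F.relu(self.conv3(x)), 2)
        x = x.view(x.shape[0], -1)
        x = self.dropout(F.relu(self.fc1(x)))
        return F.log_softmax(self.fc2(x), dim=1)

model = SmallCNN()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
```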
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset model=fc_model.Network(50176,2,[25088,12544,6272]) criterion=nn.NLLLoss() optimizer=optim.Adam(params=model.parameters(),lr=0.01) train(model,trainloader,testloader,criterion,optimizer,epochs=1) def validation(model, testloader, criterion): accuracy = 0 test_loss = 0 for images, labels in testloader: images = images.resize_(images.size()[0], 50176) output = model.forward(images) test_loss += criterion(output, labels).item() ## Calculating the accuracy # Model's output is log-softmax, take exponential to get the probabilities ps = torch.exp(output) # Class with highest probability is our predicted class, compare with true label equality = (labels.data == ps.max(1)[1]) # Accuracy is number of correct predictions divided by all predictions, just take the mean accuracy += equality.type_as(torch.FloatTensor()).mean() return test_loss, accuracy def train(model, trainloader, testloader, criterion, optimizer, epochs=5, print_every=40): steps = 0 running_loss = 0 for e in range(epochs): # Model in training mode, dropout is on model.train() for images, labels in trainloader: steps += 1 # Flatten images into a 784 long vector images.resize_(images.size()[0], 50176) optimizer.zero_grad() output = model.forward(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: # Model in inference mode, dropout is off model.eval() # Turn off gradients for validation, will speed up inference with torch.no_grad(): test_loss, accuracy = validation(model, testloader, criterion) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/print_every), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) running_loss = 0 # Make sure dropout and grads are on for training model.train() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transformations = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transformations) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
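One side effect of `transforms.Normalize` is that pixel values are no longer in the 0-1 range, which is why plotting normalized batches with `normalize=False` produces the clipping warnings seen earlier. A minimal sketch of undoing the normalization before plotting, assuming `images` is a batch from a loader normalized with the 0.5 means and stds above:

```python
import numpy as np

mean = np.array([0.5, 0.5, 0.5])
std = np.array([0.5, 0.5, 0.5])

img = images[0].numpy().transpose((1, 2, 0))  # CHW -> HWC for matplotlib
img = np.clip(std * img + mean, 0, 1)         # invert (x - mean) / std, then clip for display
plt.imshow(img)
```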
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(10), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(28), transforms.CenterCrop(10), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=128) testloader = torch.utils.data.DataLoader(test_data, batch_size=128) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code from torch import nn, optim import torch.nn.functional as F # Attempt to build a network to classify cats vs dogs from this dataset class CatsDogsNet(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(300, 256) self.fc2 = nn.Linear(256, 64) self.fc3 = nn.Linear(64, 2) def forward(self, x): x = x.view(-1, 300) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.log_softmax(self.fc3(x), dim=1) return x model = CatsDogsNet() criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.003) epochs = 4 for epoch in range(epochs): running_loss = 0 testing_loss = 0 accuracy = 0 for i, (images, labels) in enumerate(trainloader): model.train() output = model.forward(images) loss = criterion(output, labels) running_loss += loss optimizer.zero_grad() loss.backward() optimizer.step() if i%9 == 0: print("Done samples: ", int(i*128), "/", len(trainloader)*128) for i, (images, labels) in enumerate(testloader): with torch.no_grad(): model.eval() output = model.forward(images) loss = criterion(output, labels) testing_loss += loss preds = torch.exp(output) top_preds, top_cls = preds.topk(1, dim=1) equals = top_cls == labels.view(*top_cls.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) if i%9 == 0: print("Done samples: ", int(i*128), "/", len(testloader)*128) print("Training_loss: {}\tTesting_loss: {}\tAccuracy: {}".format(running_loss/len(trainloader), testing_loss/len(testloader), accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms #import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
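As a quick check that `ImageFolder` really is picking up the labels from the folder layout described above, a short sketch like the one below can be run once the dataset has been downloaded and extracted (the `Cat_Dog_data/train` path matches the exercise in this notebook; adjust it if your copy lives somewhere else):

```python
from torchvision import datasets, transforms

# Minimal pipeline just to make every image the same size and a tensor
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

# ImageFolder infers the class names and labels from the directory names
print(dataset.classes)        # ['cat', 'dog']
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1}
print(len(dataset))           # number of training images found

# Each item is a (tensor, label) pair
image, label = dataset[0]
print(image.shape, label)     # torch.Size([3, 224, 224]) 0
```

Because `ImageFolder` sorts the class directories alphabetically, `cat` maps to label 0 and `dog` to label 1 here.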
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code from google.colab import drive drive.mount('/content/drive') data_dir = '/content/drive/MyDrive/Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir,transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) import matplotlib.pyplot as plt import numpy as np from torch import nn, optim from torch.autograd import Variable def test_network(net, trainloader): criterion = nn.MSELoss() optimizer = optim.Adam(net.parameters(), lr=0.001) dataiter = iter(trainloader) images, labels = dataiter.next() # Create Variables for the inputs and targets inputs = Variable(images) targets = Variable(images) # Clear the gradients from all Variables optimizer.zero_grad() # Forward pass, then backward pass, then update weights output = net.forward(inputs) loss = criterion(output, targets) loss.backward() optimizer.step() return True def imshow(image, ax=None, title=None, normalize=True): """Imshow for Tensor.""" if ax is None: fig, ax = plt.subplots() image = image.numpy().transpose((1, 2, 0)) if normalize: mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = std * image + mean image = np.clip(image, 0, 1) ax.imshow(image) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='both', length=0) ax.set_xticklabels('') ax.set_yticklabels('') return ax def view_recon(img, recon): ''' Function for displaying an image (as a PyTorch Tensor) and its reconstruction also a PyTorch Tensor ''' fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True) axes[0].imshow(img.numpy().squeeze()) axes[1].imshow(recon.data.numpy().squeeze()) for ax in axes: ax.axis('off') ax.set_adjustable('box-forced') def view_classify(img, ps, version="MNIST"): ''' Function for viewing an image and it's predicted classes. 
''' ps = ps.data.numpy().squeeze() fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2) ax1.imshow(img.resize_(1, 28, 28).numpy().squeeze()) ax1.axis('off') ax2.barh(np.arange(10), ps) ax2.set_aspect(0.1) ax2.set_yticks(np.arange(10)) if version == "MNIST": ax2.set_yticklabels(np.arange(10)) elif version == "Fashion": ax2.set_yticklabels(['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'], size='small'); ax2.set_title('Class Probability') ax2.set_xlim(0, 1.1) plt.tight_layout() # Run this to test your data loader images, labels = next(iter(dataloader)) imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
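For instance, one reasonable pair of pipelines, sketched here as a hint rather than the official solution, applies the random augmentations only to the training set and a deterministic resize-and-crop (plus the same normalization) to the test set; the 0.5 means and standard deviations are just the placeholder values from the example above, not statistics computed from this dataset:

```python
from torchvision import transforms

# Shared normalization so train and test tensors end up on the same scale
normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       normalize])

# No randomness at test time: just resize, center-crop, convert, normalize
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      normalize])
```

Keeping the test pipeline deterministic means every evaluation pass sees the same pixels, so accuracy numbers stay comparable from epoch to epoch.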
###Code data_dir = '/content/drive/MyDrive/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(244), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this:

```python
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5, 0.5, 0.5],
                                                            [0.5, 0.5, 0.5])])
```

You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and a list of standard deviations, then the color channels are normalized like so

```
input[channel] = (input[channel] - mean[channel]) / std[channel]
```

Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.

You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.

>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.

###Code

data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
# Random augmentation for training; keep all three color channels so the
# three-channel Normalize below (and helper.imshow) keep working
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(244),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5,0.5,0.5],
                                                            [0.5,0.5,0.5])])

# Deterministic resize and crop for testing
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(244),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)

###Output

_____no_output_____

###Markdown

Your transformed images should look something like this.

Training examples:

Testing examples:

At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images, which are tiny).

In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
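If you do want to try a fully-connected baseline first, a rough sketch of a model plus training loop is below. It assumes the `trainloader` built above, works out the flattened input size from one batch rather than hard-coding it, and uses arbitrary layer sizes and learning rate purely for illustration (expect it to be slow on a CPU and to top out well below what a convolutional approach can reach):

```python
from torch import nn, optim

# Work out the flattened input size from a single batch instead of hard-coding it
images, labels = next(iter(trainloader))
n_input = images[0].numel()          # channels * height * width

model = nn.Sequential(nn.Linear(n_input, 256),
                      nn.ReLU(),
                      nn.Dropout(p=0.2),
                      nn.Linear(256, 64),
                      nn.ReLU(),
                      nn.Linear(64, 2),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()                             # pairs with the LogSoftmax output
optimizer = optim.Adam(model.parameters(), lr=0.003)

for epoch in range(1):                               # one epoch, just to see it run
    running_loss = 0
    for images, labels in trainloader:
        images = images.view(images.shape[0], -1)    # flatten each image in the batch
        optimizer.zero_grad()
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"Epoch {epoch + 1}: training loss {running_loss / len(trainloader):.3f}")
```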
###Code

from torch import nn

# Flatten one batch of images to work out the size of the input layer
n_input = images.view(images.shape[0], -1).size()[1]
n_hidden1 = int(n_input/10)
n_hidden2 = int(n_hidden1/4)
n_hidden3 = 512
n_hidden4 = 128   # size chosen arbitrarily for the fourth hidden layer
n_hidden5 = 32
output = 2

model = nn.Sequential(nn.Linear(n_input, n_hidden1),
                      nn.Dropout(p=0.2),
                      nn.ReLU(),
                      nn.Linear(n_hidden1, n_hidden2),
                      nn.Dropout(p=0.2),
                      nn.ReLU(),
                      nn.Linear(n_hidden2, n_hidden3),
                      nn.Dropout(p=0.2),
                      nn.ReLU(),
                      nn.Linear(n_hidden3, n_hidden4),
                      nn.Dropout(p=0.2),
                      nn.ReLU(),
                      nn.Linear(n_hidden4, n_hidden5),
                      nn.Dropout(p=0.2),
                      nn.ReLU(),
                      nn.Linear(n_hidden5, output),
                      nn.LogSoftmax(dim=1))
print(model)

###Output

_____no_output_____

###Markdown

Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple of example images:

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.

###Code

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper

###Output

_____no_output_____

###Markdown

The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:

```python
dataset = datasets.ImageFolder('path/to/data', transform=transform)
```

where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set.

Transforms

When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence.
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(80), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader =torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). 
When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../data/Cat_Dog_data' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir+'/train', transform=transform) dataloader = dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
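Before filling in the pipelines, it can help to sanity-check what `transforms.Normalize` actually does. The snippet below is a small illustration on a fake image tensor (values in [0, 1], as `ToTensor()` produces), using the 0.5 means and standard deviations from the example above rather than statistics measured on this dataset:

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

# Fake 3-channel "image" with values in [0, 1], like ToTensor() would give us
fake_image = torch.rand(3, 224, 224)
print(fake_image.min().item(), fake_image.max().item())    # roughly 0.0 and 1.0

normalized = normalize(fake_image)
print(normalized.min().item(), normalized.max().item())    # roughly -1.0 and 1.0

# Undoing it recovers the original values: x = x_norm * std + mean
restored = normalized * 0.5 + 0.5
print(torch.allclose(restored, fake_image, atol=1e-6))     # True (up to float noise)
```

The same multiply-and-add trick is what you would use to un-normalize images before displaying them.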
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn, optim from torchvision import datasets, transforms import helper import fc_model ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = './Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) images.shape ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = './Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
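To make the two access patterns above concrete, here is a self-contained sketch of what working with the finished loader looks like; the transform sizes are placeholder choices and the path assumes the extracted `Cat_Dog_data` folder sits next to the notebook:

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pattern 1: grab a single batch
images, labels = next(iter(dataloader))
print(images.shape)    # torch.Size([32, 3, 224, 224])
print(labels.shape)    # torch.Size([32])

# Pattern 2: loop over every batch (one full pass over the data = one epoch)
for images, labels in dataloader:
    pass   # training code would go here

print(len(dataloader), "batches per epoch")
```

The single-batch pattern is handy for spot-checking shapes; the loop is what a training epoch actually uses.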
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(512), transforms.CenterCrop(300), transforms.ToTensor()]) dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(180), transforms.RandomVerticalFlip(), transforms.RandomHorizontalFlip(), transforms.RandomResizedCrop(300), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(512), transforms.CenterCrop(300), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms import helper import fc_model from workspace_utils import active_session model = fc_model.Network(784, 10, [512, 256, 128]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.0001) with active_session(): fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=300) checkpoint = {'input_size': 784, 'output_size': 10, 'hidden_layers': [each.out_features for each in model.hidden_layers], 'state_dict': model.state_dict()} torch.save(checkpoint, 'checkpoint_cat_dog.pth') ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. 
We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. 
Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
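To make the batches from the `DataLoader` concrete, here is a hedged sketch (again assuming the `Cat_Dog_data/train` folder exists, and using one reasonable choice of transforms) that pulls a single batch and prints the tensor shapes you should expect:

```python
# Hedged sketch: inspect one batch from a DataLoader built as described above.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

images, labels = next(iter(dataloader))
print(images.shape)   # torch.Size([32, 3, 224, 224]) -- batch, channels, height, width
print(labels.shape)   # torch.Size([32]) -- one integer class index per image
print(labels[:10])    # indices match dataset.class_to_idx
```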
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data # (one reasonable choice -- normalization left off for now, per the exercise) train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep.
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/victor/data/.pytorch/Cat_Dog_data/train' data_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=data_transforms) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
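To see the normalization formula above in action (and how to undo it when you want to display an image), here is a short sketch assuming the 0.5 mean and 0.5 std per channel used in the example:

```python
# Hedged sketch of what transforms.Normalize does to a tensor image in [0, 1],
# and how to invert it for display (assumes mean=std=0.5 per channel, as above).
import torch
from torchvision import transforms

img = torch.rand(3, 224, 224)                      # stand-in for a ToTensor() output in [0, 1]
normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
normed = normalize(img)                            # (img - 0.5) / 0.5 per channel -> roughly [-1, 1]
print(normed.min().item(), normed.max().item())

unnormed = normed * 0.5 + 0.5                      # undo the normalization before plotting
print(torch.allclose(unnormed, img))               # True
```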
###Code data_dir = '/home/victor/data/.pytorch/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([.5, .5, .5], [.5, .5, .5])]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([.5, .5, .5], [.5, .5, .5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
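If you want to see exactly what a `Compose` pipeline like the one above does, a quick sketch is to run it on a single image before building the full dataset. This assumes the `Cat_Dog_data/train` folder exists; the specific file is just whichever `glob` finds first:

```python
# Hedged sketch: apply the resize/crop/ToTensor pipeline to one image and check the result.
from glob import glob
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

sample_path = glob('Cat_Dog_data/train/*/*')[0]          # any one training image
pil_img = Image.open(sample_path)
print(pil_img.size)                                      # original (width, height), varies per image
tensor_img = transform(pil_img)
print(tensor_img.shape)                                  # torch.Size([3, 224, 224]) after resize + crop
print(tensor_img.min().item(), tensor_img.max().item())  # ToTensor() scales pixels to [0, 1]
```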
###Code !wget -c https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip !unzip -qq Cat_Dog_data.zip data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
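The 0.5 means and standard deviations above are just a convenient shortcut. If you prefer statistics measured from the data itself, here is a rough sketch that accumulates per-channel sums over the un-normalized `dataloader` built above; it's an approximation (random crops shift the numbers slightly), not the one official recipe:

```python
# Rough sketch for estimating per-channel mean/std to feed transforms.Normalize
# (assumes `dataloader` from the cell above yields un-normalized tensors in [0, 1]).
import torch

channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0

for images, _ in dataloader:
    # images: [batch, 3, H, W]; accumulate over batch, height and width
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    n_pixels += images.shape[0] * images.shape[2] * images.shape[3]

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)   # plug into transforms.Normalize(mean.tolist(), std.tolist())
```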
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii] / 2 + 0.5, ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(44944, 120) self.fc2 = nn.Linear(120, 84) self.output = nn.Linear(84, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = F.log_softmax(self.output(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 5 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model.to(device) for e in range(epochs): running_loss = 0 model.train() for images, labels in trainloader: images, labels = images.to(device), labels.to(device) log_ps = model(images) loss = criterion(log_ps, labels) optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() else: tot_test_loss = 0 test_correct = 0 with torch.no_grad(): model.eval() for images, labels in testloader: images, labels = images.to(device), labels.to(device) log_ps = model(images) loss = criterion(log_ps, labels) tot_test_loss += loss.item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) test_correct += equals.sum().item() train_loss = running_loss / len(trainloader.dataset) test_loss = tot_test_loss / len(testloader.dataset) accuracy = test_correct / len(testloader.dataset) * 100 print(f'Epoch: {e+1}/{epochs}', f'Training Loss: {train_loss:.3f}', f'Test Loss: {test_loss:.3f}', f'Test Accuracy: {accuracy}%') def view_classify(img, ps): ''' Function for viewing an image and it's predicted classes. ''' ps = ps.data.numpy().squeeze() fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2) ax1.imshow(img.numpy().squeeze().transpose((1, 2, 0))) ax1.axis('off') ax2.barh(np.arange(2), ps) ax2.set_aspect(0.1) ax2.set_yticks(np.arange(2)) ax2.set_yticklabels(['Dog', 'Cat']) ax2.set_title('Class Probability') ax2.set_xlim(0, 1.1) plt.tight_layout() images, labels = next(iter(testloader)) images, labels = images.to(device), labels.to(device) with torch.no_grad(): model.eval() output = model(images) index = 0 ps = torch.exp(output).cpu() view_classify((0.5 * images.cpu()[index] + 0.5), ps[index]) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
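Beyond `batch_size` and `shuffle`, the `DataLoader` takes a few more arguments worth knowing. The sketch below uses standard `torch.utils.data.DataLoader` parameters and assumes the same `Cat_Dog_data/train` layout as the exercise:

```python
# Hedged sketch: common DataLoader options beyond batch_size and shuffle.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

dataloader = torch.utils.data.DataLoader(dataset,
                                         batch_size=32,
                                         shuffle=True,      # reshuffle the data every epoch
                                         num_workers=4,     # load images in parallel worker processes
                                         pin_memory=True,   # speeds up host-to-GPU copies
                                         drop_last=True)    # drop the final, smaller batch
print(len(dataloader))  # number of batches per epoch
```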
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data # (normalization left off for now, per the exercise, so imshow(normalize=False) displays correctly) train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) # resize and crop the test images so every image in a batch has the same size test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs.
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
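As a hedged aside, other torchvision transforms (and even your own functions, via `transforms.Lambda`) chain into the same `Compose` pipeline; the specific steps below are only illustrative:

```python
# Illustrative sketch: extra transforms slot into the same Compose pipeline.
from torchvision import transforms

pipeline = transforms.Compose([transforms.Resize(255),
                               transforms.CenterCrop(224),
                               transforms.Grayscale(num_output_channels=3),     # PIL-level transform
                               transforms.ToTensor(),
                               transforms.Lambda(lambda t: t.clamp(0.0, 1.0))]) # custom tensor step
```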
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
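Since the cell that follows leaves the transforms as a TODO, here is one possible way to fill it in, keeping normalization off as the exercise asks. The particular sizes and augmentations are reasonable choices rather than the only correct answer:

```python
# One possible answer sketch for the TODO below (no normalization yet).
from torchvision import transforms

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])
```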
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()] ) test_transforms = transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
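Once you have a `dataloader`, a quick sanity check is to pull a couple of batches and look at their shapes. A minimal sketch, assuming a loader built as in the snippet above with `batch_size=32` and 224x224 crops:

```python
# Inspect the first two batches coming out of the DataLoader
for batch_idx, (images, labels) in enumerate(dataloader):
    print(batch_idx, images.shape, labels.shape)  # e.g. torch.Size([32, 3, 224, 224]) and torch.Size([32])
    if batch_idx == 1:
        break
```
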
###Code data_dir = 'C:\\Users\\willk\\Downloads\\Cat_Dog_data\\Cat_Dog_data\\train' train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, train_transforms) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True, pin_memory = True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
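As a worked example of the normalization formula above: with `mean = 0.5` and `std = 0.5`, a pixel value of 0.8 becomes (0.8 - 0.5) / 0.5 = 0.6, and a value of 0.0 becomes -1.0. The sketch below assumes a `trainloader` whose pipeline ends with `transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])`; it checks the value range of one batch and undoes the normalization before plotting:

```python
# One normalized batch: values should now sit roughly between -1 and 1
images, labels = next(iter(trainloader))
print(images.min().item(), images.max().item())

# Undo the normalization (x * std + mean) so matplotlib can display the image
unnormalized = images[0] * 0.5 + 0.5
plt.imshow(unnormalized.permute(1, 2, 0).numpy())  # channels last for imshow
plt.show()
```
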
###Code data_dir = 'C:\\Users\\willk\\Downloads\\Cat_Dog_data\\Cat_Dog_data\\' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '\\train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '\\test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, pin_memory = True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, pin_memory = False) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x net = Classifier() net ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
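It can also be useful to check how much data the loader will serve: `len(dataset)` is the number of image files `ImageFolder` found, and `len(dataloader)` is the number of batches per epoch. A small sketch, assuming the `Cat_Dog_data/train` folder and a batch size of 32:

```python
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(len(dataset))     # number of images found under the class folders
print(len(dataloader))  # number of batches per epoch, roughly len(dataset) / 32
```
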
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.RandomAffine(5, translate=(0.05, 0.05), scale=(0.9, 1.1), shear=0.1), transforms.Resize(300), transforms.CenterCrop(300), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(labels[0]) ###Output tensor(0) ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(254), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(254), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 

###Code
data_dir = 'asset/Cat_Dog_data/train'

transform = transforms.Compose([transforms.Resize(200),
                                transforms.CenterCrop(180),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder(data_dir, transform=transform)
# DataLoader lives in torch.utils.data (torch.data.Dataloader does not exist)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)

###Output
_____no_output_____
###Markdown
If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Test images should not be augmented: just resize and center-crop them
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)

###Output
_____no_output_____
###Markdown
Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images, which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.

###Code
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
###Output
_____no_output_____
###Markdown
Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.

###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
###Output
_____no_output_____
###Markdown
The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder('C:/Users/iampu/Downloads/Cat_Dog_data/Cat_Dog_data/train', transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) images.shape ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
###Code data_dir = 'C:/Users/iampu/Downloads/Cat_Dog_data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'M:/Download/all' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform= transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size= 32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
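A nice way to see what the random training transforms are doing is to run the same image through the pipeline several times and plot the results; every call re-samples the rotation, crop, and flip. This is just a sketch: the file path is a hypothetical example image, and the pipeline mirrors the augmentation shown above:

```python
from PIL import Image

img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')  # hypothetical example file

aug = transforms.Compose([transforms.RandomRotation(30),
                          transforms.RandomResizedCrop(224),
                          transforms.RandomHorizontalFlip(),
                          transforms.ToTensor()])

fig, axes = plt.subplots(figsize=(10, 4), ncols=4)
for ax in axes:
    ax.imshow(aug(img).permute(1, 2, 0).numpy())  # different random draw each time
    ax.axis('off')
plt.show()
```
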
###Code data_dir = r'M:/Download/all' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomResizedCrop(225), transforms.RandomRotation(30), transforms.RandomVerticalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train/', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test/', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code #data_dir = 'Cat_Dog_data/train' data_dir = r"C:\Users\fl_su\projects\data\Cat_Dog_data\Cat_Dog_data" transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(200), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder batch_size = 64 shuffle = True dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code #data_dir = 'Cat_Dog_data' data_dir = r"C:\Users\fl_su\projects\data\Cat_Dog_data\Cat_Dog_data" # TODO: Define transforms for the training data and testing data # Remember: # I ran into issues b/c each image was different w & h. # I used scalar argument to resize. This led to different images being transformed to different sizes. # The result was error when calling images, labels = next(data_iter) in the next cell of notebook # Below train_transforms & test_transforms code is from Part 8 Transfer Learning solution notebook. train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
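Those `Clipping input data...` messages are expected here: the test pipeline above normalizes with the ImageNet statistics, so the tensor values fall outside the [0..1] range matplotlib expects, and `imshow` clips them for display. If you'd rather see the images undistorted, one option is to undo the normalization before plotting; a small sketch, assuming the same ImageNet means and stds used in the cell above:

```python
import numpy as np

def unnormalize(img_tensor, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Invert channel-wise normalization and return an HxWxC array for plotting."""
    img = img_tensor.numpy().transpose((1, 2, 0))   # CxHxW -> HxWxC
    img = img * np.array(std) + np.array(mean)      # invert (x - mean) / std
    return np.clip(img, 0, 1)

plt.imshow(unnormalize(images[0]))
```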
###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn import torch.nn.functional as F class CatDogModel(nn.Module): def __init__(self): super().__init__() n_inputs = 224*224*3 fc_divisors = [128, 256, 512] fc_sizes = [int(n_inputs/divisor) for divisor in fc_divisors] n_classes = 2 self.fc1 = nn.Linear(n_inputs, fc_sizes[0]) self.fc2 = nn.Linear(fc_sizes[0], fc_sizes[1]) self.fc3 = nn.Linear(fc_sizes[1], fc_sizes[2]) self.fc4 = nn.Linear(fc_sizes[2], n_classes) self.dropout1 = nn.Dropout(0.001) self.dropout2 = nn.Dropout(0.01) self.dropout3 = nn.Dropout(0.05) def forward(self, x): x = x.view(x.shape[0],-1) x = self.dropout1(F.relu(self.fc1(x))) x = self.dropout2(F.relu(self.fc2(x))) x = self.dropout3(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = CatDogModel() from torch import optim learning_rate = 0.01 criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) epochs = 30 device="cuda" model.to(device) for epoch in range(epochs): train_loss = 0 for images,labels in trainloader: images, labels = images.to(device), labels.to(device) optimizer.zero_grad() log_ps = model(images) batch_train_loss = criterion(log_ps, labels) batch_train_loss.backward() optimizer.step() train_loss += batch_train_loss print(train_loss) model.cpu() import helper # Test out your network! model.cpu() model.eval() dataiter = iter(testloader) images, labels = dataiter.next() import matplotlib.pyplot as plt import numpy as np for i in range(3): plt.subplot(1,3,i+1) img = images[i].numpy().transpose((1, 2, 0)) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) img = std * img + mean img = np.clip(img, 0, 1) plt.imshow(img) # Calculate the class probabilities (softmax) for img with torch.no_grad(): outputs = model.forward(images[:3,:]) ps = torch.exp(outputs) ps for images,labels in testloader: a=1 break labels ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
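Once the dataset and loader in the next cell exist, a quick way to confirm everything is wired up correctly is to look at the classes `ImageFolder` discovered and at the shape of one batch. A sketch, assuming the `dataset` and `dataloader` names used below:

```python
print(dataset.classes)        # one entry per subdirectory, e.g. ['cat', 'dog']
print(dataset.class_to_idx)   # mapping used for the labels, e.g. {'cat': 0, 'dog': 1}

images, labels = next(iter(dataloader))
print(images.shape)           # e.g. torch.Size([32, 3, 224, 224]) with a 224 crop
print(labels[:8])             # integer class indices for the first few images
```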
###Code data_dir = 'E:\code\PetImages/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'E:\code\PetImages' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) #print(images.shape) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
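One detail worth knowing about `transforms.Resize`: when you pass a single integer, torchvision scales the *shorter* edge of the image to that size and keeps the aspect ratio, so different photos can still come out with different shapes. That's why the example above follows it with a crop, which is what finally makes every image the same size. A small illustration (the file path is made up):

```python
from PIL import Image
from torchvision import transforms

img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')   # hypothetical example file

resized = transforms.Resize(255)(img)
print(resized.size)                                # shorter edge is 255, the other edge varies
print(transforms.CenterCrop(224)(resized).size)    # always (224, 224)
```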
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
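One note on the means and standard deviations: `[0.5, 0.5, 0.5]` simply squashes each channel into [-1, 1]. If you plan to feed the images to a torchvision model pretrained on ImageNet, as in the next part, the usual choice is to normalize with the ImageNet channel statistics instead. A sketch of that variant:

```python
normalize = transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet channel means
                                 [0.229, 0.224, 0.225])   # ImageNet channel stds

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                        transforms.RandomResizedCrop(224),
                                        transforms.RandomHorizontalFlip(),
                                        transforms.ToTensor(),
                                        normalize])
```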
###Code
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

# Resize + center crop for the test set, mirroring the earlier cells in this notebook
test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output
_____no_output_____
###Markdown
Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
###Code
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
###Output
_____no_output_____
###Markdown
Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
###Output
_____no_output_____
###Markdown
The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html).
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform = transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data # Note: we apply random rotations etc ( which are called augmentations) # to training data only,because it helps us generate more data, # and also helps simulate more data points train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
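To see why the fully-connected approach struggles here, it helps to count weights: a 224x224 RGB image flattens to 224 * 224 * 3 = 150,528 inputs, so even a modest first hidden layer of 512 units already needs roughly 77 million weights. A quick check of that arithmetic:

```python
input_size = 224 * 224 * 3            # flattened RGB image
hidden_size = 512
print(input_size)                     # 150528
print(input_size * hidden_size)       # 77070336 weights in the first layer alone
```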
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
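For the loaders built in the next cell, it's common to shuffle only the training data (so each epoch sees the images in a new order) and leave the test loader in a fixed order; `num_workers` is another frequently used option that loads batches in parallel worker processes. A sketch, assuming the `train_data`/`test_data` datasets defined below:

```python
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=False, num_workers=2)
```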
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose(transforms=[ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(0.5), transforms.ToTensor() ]) test_transforms = transforms.Compose(transforms=[ transforms.CenterCrop(224), transforms.ToTensor() ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'data/' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
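The optional attempt below moves the model and every batch to the GPU with `.cuda()`. If you want the same code to also run on a CPU-only machine, a device-agnostic pattern (a sketch, not part of the original cell) is to pick the device once and use `.to(device)` everywhere:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Network().to(device)    # Network as defined in the cell below
for images, labels in trainloader:
    images, labels = images.to(device), labels.to(device)
    break                       # transfer demonstrated; training proceeds as in the cell below
```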
###Code
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
import torch.nn.functional as F
from torch import nn, optim

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(150528, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 128)
        self.fc4 = nn.Linear(128, 64)
        self.fc5 = nn.Linear(64, 32)
        self.output = nn.Linear(32, 2)
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        x = x.view(x.shape[0], -1)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        x = self.dropout(F.relu(self.fc4(x)))
        x = self.dropout(F.relu(self.fc5(x)))
        x = F.log_softmax(self.output(x), dim=1)
        return x

# Create the model first, then hand its parameters to the optimizer
model = Network()
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

def train(model, trainloader, testloader, criterion, optimizer, epochs=5, print_every=40):
    steps = 0
    running_loss = 0
    model.cuda()
    for e in range(epochs):
        # Model in training mode, dropout is on
        model.train()
        for images, labels in trainloader:
            steps += 1
            images = images.cuda()
            labels = labels.cuda()

            optimizer.zero_grad()

            output = model.forward(images)
            loss = criterion(output, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()

            if steps % print_every == 0:
                # Model in inference mode, dropout is off
                model.eval()

                # Turn off gradients for validation, will speed up inference
                with torch.no_grad():
                    test_loss, accuracy = validation(model, testloader, criterion)

                print("Epoch: {}/{}.. ".format(e+1, epochs),
                      "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                      "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                      "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

                running_loss = 0

                # Make sure dropout and grads are on for training
                model.train()

def validation(model, testloader, criterion):
    accuracy = 0
    test_loss = 0
    for images, labels in testloader:
        images = images.view(images.shape[0], -1)
        images = images.cuda()
        labels = labels.cuda()

        output = model.forward(images)
        test_loss += criterion(output, labels).item()

        ## Calculating the accuracy
        # Model's output is log-softmax, take exponential to get the probabilities
        ps = torch.exp(output)
        # Class with highest probability is our predicted class, compare with true label
        equality = (labels.data == ps.max(1)[1])
        # Accuracy is number of correct predictions divided by all predictions, just take the mean
        accuracy += equality.type_as(torch.FloatTensor()).mean()

    return test_loss, accuracy

train(model, trainloader, testloader, criterion, optimizer)
###Output
Epoch: 1/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.644.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.644.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.647.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.666.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.743.. Test Loss: 0.695..
Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.742.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 1/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.705.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.706.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.746.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 2/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.664.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.648.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.644.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.746.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 3/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.725.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.647.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.647.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.645.. 
Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.687.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.746.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 4/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.685.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.644.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.646.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.645.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.726.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.745.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.744.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 Epoch: 5/5.. Training Loss: 0.743.. Test Loss: 0.695.. Test Accuracy: 0.494 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = '../Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) type(dataloader) len(dataloader) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
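For when you do add normalization later, the channel-wise formula above is easy to check by hand. The sketch below is not part of the original exercise: it uses a random tensor purely for illustration, along with the placeholder means and standard deviations of 0.5 from the example, and confirms that `transforms.Normalize` matches the manual `(input - mean) / std` computation.

```python
import torch
from torchvision import transforms

# A fake 3-channel "image" tensor with values in [0, 1): illustrative data only
img = torch.rand(3, 224, 224)

# Apply the formula by hand first, on a copy, since some torchvision versions normalize in place
manual = (img.clone() - 0.5) / 0.5

# Then let torchvision do the same thing
normalized = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])(img)

print(torch.allclose(normalized, manual))         # True
print(manual.min().item(), manual.max().item())   # roughly -1 to 1
```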
###Code data_dir = '../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset display(images.shape) labels.shape from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 16384) self.fc2 = nn.Linear(16384, 1024) self.fc3 = nn.Linear(1024, 128) self.fc4 = nn.Linear(128, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 3 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. 
".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
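As a side note on the labeling described earlier: `ImageFolder` derives the class names from the sub-directory names and stores the mapping to integer labels, which is handy for turning predictions back into `cat`/`dog`. A minimal check using the `dataset` and `dataloader` built above; the values in the comments are simply what you would expect for this folder layout.

```python
# Class names come straight from the folder names under the data directory
print(dataset.classes)        # expected: ['cat', 'dog']
print(dataset.class_to_idx)   # expected: {'cat': 0, 'dog': 1}

# Each label in a batch is just the integer index of one of those classes
images, labels = next(iter(dataloader))
print(labels[:8])
```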
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code # # fix for MacOS # # from https://stackoverflow.com/questions/53014306/error-15-initializing-libiomp5-dylib-but-found-libiomp5-dylib-already-initial # import os os.environ['KMP_DUPLICATE_LIB_OK']='True' %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code !pwd data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir,transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) print(labels[0].item()) helper.imshow(images[0], normalize=False) ###Output 1 ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
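On the normalization point above: the 0.5 means and standard deviations are convenient placeholders. If you eventually want statistics measured from your own training images, you can accumulate them channel by channel over batches. This is a rough sketch, not part of the original notebook; it assumes a `trainloader` (like the one built in the next cell) that yields `(images, labels)` batches of shape `(N, 3, H, W)` with no normalization applied, and it only approximates the true global standard deviation by averaging per-batch values.

```python
import torch

mean = torch.zeros(3)
std = torch.zeros(3)
n_batches = 0

for images, _ in trainloader:
    # Flatten the spatial dimensions so each channel is one long vector per image
    images = images.view(images.size(0), images.size(1), -1)
    mean += images.mean(2).mean(0)   # per-channel mean of this batch
    std += images.std(2).mean(0)     # per-channel std of this batch (approximation)
    n_batches += 1

mean /= n_batches
std /= n_batches
print(mean, std)  # usable as transforms.Normalize(mean.tolist(), std.tolist())
```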
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) images.shape ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3*224*224, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Epoch {e} Running loss {running_loss/len(trainloader)}") train_losses.append(running_loss/len(trainloader)) ## TODO: Implement the validation pass and print out the validation accuracy accuracy = 0 test_loss = 0 model.eval() #enter inference mode wiout dropout for images, labels in testloader: with torch.no_grad(): log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p,top_class = ps.topk(1,dim=1) equal = (top_class == labels.view(*top_class.shape)) accuracy += (torch.mean(equal.type(torch.FloatTensor))) test_losses.append(test_loss/len(testloader)) model.train() # return to a training mode with dropout print("Epoch {}/{}".format(e+1,epochs), "Train loss 
{:.3f}".format(running_loss/len(trainloader)), "Test loss {:.3f}".format(test_loss/len(testloader)), "Accuracy {:.3f}".format(accuracy/len(testloader))) plt.plot(train_losses,label="Training loss") plt.plot(test_losses,label="Test loss") plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder('./Cat_Dog_data',transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
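Before filling in the transforms below, it can help to see that an augmentation pipeline like the one described above is stochastic: the random rotation, crop and flip are re-drawn every time an image is loaded, so the network effectively never sees exactly the same picture twice. A small sketch, using a noise image rather than one from the dataset, purely to illustrate the behaviour:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# A dummy PIL image made of noise, standing in for a real photo (illustration only)
dummy = Image.fromarray((np.random.rand(256, 256, 3) * 255).astype(np.uint8))

aug = transforms.Compose([transforms.RandomRotation(30),
                          transforms.RandomResizedCrop(224),
                          transforms.RandomHorizontalFlip(),
                          transforms.ToTensor()])

# Running the same image through the pipeline twice gives two different tensors
a, b = aug(dummy), aug(dummy)
print(torch.equal(a, b))  # almost certainly False
```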
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/data/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224),transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '/data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms =transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
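As a quick aside, the arithmetic `transforms.Normalize` performs is easy to check by hand. Below is a minimal sketch of that calculation and of how to undo it before plotting; the `img` tensor is just a random stand-in for a `ToTensor()` output, not part of the exercise:

```python
import torch

mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
std = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)

img = torch.rand(3, 224, 224)        # stand-in for a ToTensor() image with values in [0, 1]
normalized = (img - mean) / std      # what transforms.Normalize computes, channel by channel
restored = normalized * std + mean   # undo the normalization before displaying the image
```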
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from torch.utils.data import DataLoader import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
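If you want to check that mapping yourself, `ImageFolder` exposes it directly. Here's a quick sketch, assuming the `Cat_Dog_data/train` folder used later in this notebook:

```python
from torchvision import datasets, transforms

dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transforms.ToTensor())
print(dataset.classes)        # ['cat', 'dog'], taken from the directory names
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1}, the labels the loader will return
```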
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([ 0.5, 0.5, 0.5 ], [ 0.5, 0.5, 0.5 ]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([ 0.5, 0.5, 0.5 ], [ 0.5, 0.5, 0.5 ]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
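As a rough preview of that next part (a sketch, not the full solution), transfer learning with a `torchvision` model might look something like this; the two-output layer is my assumption for the cat/dog case:

```python
from torch import nn
from torchvision import models

model = models.resnet18(pretrained=True)        # network already trained on ImageNet

for param in model.parameters():                # freeze the pre-trained feature layers
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)   # swap in a new two-class classifier head
```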
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir,transform = transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle= True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
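When you do add normalization back in later, one way to keep the training and test pipelines consistent is to define the statistics once and reuse them in both. A sketch, using the same 0.5 values shown earlier:

```python
normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       normalize])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      normalize])
```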
###Code
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5,0.5,0.5],
                                                            [0.5,0.5,0.5])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
###Output
_____no_output_____
###Markdown
Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
###Code
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
###Output
_____no_output_____
###Markdown
Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
###Output
_____no_output_____
###Markdown
The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), #transforms.RandomHorizontalFlip(), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
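To get a feel for why the fully-connected approach struggles here, it helps to count parameters. This is just back-of-the-envelope arithmetic, not code from the exercise:

```python
pixels = 224 * 224 * 3               # 150,528 input features per image
first_layer = pixels * 256 + 256     # weights + biases for a single 256-unit hidden layer
print(pixels, first_layer)           # 150528 and roughly 38.5 million parameters
```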
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import optim from torch import nn import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' try: import google.colab IN_COLAB = True except: IN_COLAB = False if IN_COLAB: !wget -nc -q https://raw.githubusercontent.com/joaopamaral/deep-learning-v2-pytorch/master/intro-to-pytorch/helper.py import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code if IN_COLAB: !wget -nc -q https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip !unzip -q Cat_Dog_data.zip !ls Cat_Dog_data data_dir = 'Cat_Dog_data/train' trf = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=trf) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader # if IN_COLAB: # !pip -q install Pillow==4.0.0 # !pip -q install image images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False); ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) print(images[0].view(1, -1).shape) ###Output torch.Size([1, 150528]) ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
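If you want to experiment before reaching the pre-trained models, a small convolutional network is a more promising starting point than stacking `Linear` layers. This is only a rough sketch (it assumes a recent PyTorch with `nn.Flatten`), not the approach the next part uses:

```python
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 224 -> 112
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 112 -> 56
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 56 -> 28
    nn.Flatten(),
    nn.Linear(64 * 28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 2), nn.LogSoftmax(dim=1)                        # pairs with NLLLoss
)
```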
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code # Loaf helper file link = "https://drive.google.com/file/d/1vfvwy8nG2xHeubJyNvNgYFhyxs6c903T/view?usp=sharing" _, id_t = link.split('d/') id = id_t.split('/')[0] print ("Loading file ...") print (id) # Verify that you have everything after '=' # Install the PyDrive wrapper & import libraries. # This only needs to be done once per notebook. !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. # This only needs to be done once per notebook. 
auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) file_id = id downloaded = drive.CreateFile({'id':file_id}) downloaded.FetchMetadata(fetch_all=True) downloaded.GetContentFile(downloaded.metadata['title']) print ("Completed") %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
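Once your loader works, a quick sanity check on one batch can catch transform mistakes early. A sketch, assuming the `dataloader` you build below with a 224x224 crop:

```python
images, labels = next(iter(dataloader))
print(images.shape)              # expected: torch.Size([32, 3, 224, 224])
print(labels.shape)              # expected: torch.Size([32])
print(len(dataloader.dataset))   # total number of images ImageFolder found
```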
###Code # Load the data set link = "https://drive.google.com/file/d/1Cn0B9Zr2irUnZcHqODT9IilGHf9fZ61R/view?usp=sharing" _, id_t = link.split('d/') id = id_t.split('/')[0] print ("Loading file ...") print (id) # Verify that you have everything after '=' # Install the PyDrive wrapper & import libraries. # This only needs to be done once per notebook. !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. # This only needs to be done once per notebook. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) file_id = id downloaded = drive.CreateFile({'id':file_id}) downloaded.FetchMetadata(fetch_all=True) downloaded.GetContentFile(downloaded.metadata['title']) print ("Completed") !ls ! unzip -qq Cat_Dog_data.zip !ls data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) plt.show() ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
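Because the training transforms are random, you can convince yourself augmentation is working by asking for the same item twice. A sketch, assuming the `train_data` you build in the next cell:

```python
img_a, _ = train_data[0]           # same underlying file both times...
img_b, _ = train_data[0]
print(torch.equal(img_a, img_b))   # ...but usually False: a different crop/rotation/flip each draw
```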
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # Example of title on the raw subplot fig, big_axes = plt.subplots( figsize=(10, 4) , nrows=2, ncols=1) label_ = ["Training", "Testing"] for row, big_ax in enumerate(big_axes, start=0): big_ax.set_title("%s examples :\n" % label_[row], fontsize=16) # Turn off axis lines and ticks of the big subplot # obs alpha is 0 in RGBA string! big_ax.tick_params(labelcolor=(1.,1.,1., 0.0), top='off', bottom='off', left='off', right='off') # removes the white frame big_ax._frameon = False # Training data loader data_iter = iter(trainloader) images, labels = next(data_iter) for i in range(1,5): ax = fig.add_subplot(2,4,i) helper.imshow(images[i], ax=ax, normalize=False) # Testing data Loader data_dir = iter(testloader) images, labels = next(data_iter) for i in range(5,9): ax = fig.add_subplot(2,4,i) helper.imshow(images[i], ax=ax, normalize=False) # ax.set_title('Plot title ' + str(i)) fig.set_facecolor('w') plt.tight_layout() plt.show() # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
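If you do train the network below, a GPU helps a lot with images this size. A minimal sketch of the usual device handling; `model` here refers to the classifier defined in the next cell:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# inside the training and validation loops, move each batch as well:
# images, labels = images.to(device), labels.to(device)
```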
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: ## TODO: Implement the validation pass and print out the validation accuracy test_loss = 0 accuracy = 0 with torch.no_grad(): # validation pass here for images, labels in testloader: log_pro = model(images) test_loss +=criterion(log_pro, labels) pro = torch.exp(model(images)) _, y_pred = pro.topk(1, dim=1) equals = y_pred == labels.view(y_pred.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f} %".format(accuracy*100/len(testloader))) labels 224*224*3 a_1=images a_2=a_1.view(a_1.shape[0], -1) a_2.shape ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. 
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and a list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero, which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing, however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now.
###Code
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.5, 0.5, 0.5],
                                                            [0.5, 0.5, 0.5])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.5, 0.5, 0.5],
                                                           [0.5, 0.5, 0.5])])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
    print(images[ii].shape)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images, which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.
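If you do attempt a fully-connected classifier on these images, it helps to confirm the flattened input size first, since the first `nn.Linear` layer has to match it exactly. Here's a minimal sketch; the 3x224x224 shape and batch size of 32 are simply the values the loaders above produce, and the 256-unit hidden layer is just an arbitrary example width:

```python
import torch
from torch import nn

# A 224x224 RGB image flattens to 3 * 224 * 224 = 150528 values per example
in_features = 3 * 224 * 224

# Fake batch with the same shape a batch_size=32 loader would yield
images = torch.randn(32, 3, 224, 224)
flat = images.view(images.shape[0], -1)
print(flat.shape)        # torch.Size([32, 150528])

# The first fully-connected layer must accept that flattened size
fc1 = nn.Linear(in_features, 256)
print(fc1(flat).shape)   # torch.Size([32, 256])
```

Any mismatch between the flattened size and the first layer's `in_features` shows up as a shape error on the very first forward pass, so this check is cheap insurance.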
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 18816) self.fc2 = nn.Linear(18816, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) #Trainloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) plt.title('Training Examples:') for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=True) # Testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) plt.title('Testing Examples:') for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=True) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
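Whatever architecture you try, a validation pass usually follows the same pattern: switch the model to evaluation mode so dropout is disabled, turn off gradients, and score accuracy from the most probable class. A minimal sketch, assuming `model`, `criterion`, and `testloader` are defined the way they are in the cell below:

```python
import torch

# Hedged sketch of a validation pass; assumes model, criterion and testloader already exist
test_loss = 0
accuracy = 0

model.eval()                     # disable dropout for evaluation
with torch.no_grad():            # no gradients needed while validating
    for images, labels in testloader:
        log_ps = model(images)
        test_loss += criterion(log_ps, labels).item()

        ps = torch.exp(log_ps)                   # log-probabilities -> probabilities
        top_p, top_class = ps.topk(1, dim=1)     # most likely class per image
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
model.train()                    # back to training mode

print("Test loss: {:.3f}, accuracy: {:.3f}".format(test_loss/len(testloader),
                                                   accuracy/len(testloader)))
```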
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F ## TODO: Define your model with dropout added class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) # Dropout module with 0.2 drop probability self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) # Now with dropout x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) # output so no dropout here x = F.log_softmax(self.fc4(x), dim=1) return x ## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 10 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 test_loss = 0 accuracy = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: ## TODO: Implement the validation pass and print out the validation accuracy with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) # set model back to train mode model.train() %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot( train_losses, label = 'Training Loss') plt.plot( test_losses, label = 'Testing Loss') ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
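Once you've built the loader, it's worth pulling a single batch to confirm the shapes before doing anything else. With a batch size of 32 and a 224-pixel center crop you'd expect something like the following (a quick check, assuming a `dataloader` built as in the exercise code below):

```python
# Sanity check on one batch from the loader defined below
images, labels = next(iter(dataloader))

print(images.shape)   # torch.Size([32, 3, 224, 224]) for batch_size=32 and CenterCrop(224)
print(labels.shape)   # torch.Size([32])
print(labels[:8])     # class indices taken from the folder names, e.g. 0 = cat, 1 = dog
```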
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])# TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform)# TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)# TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
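The normalization formula above is easy to verify on a handful of made-up values, and running the same arithmetic in reverse is how you recover a displayable image from a normalized tensor. A small sketch (the numbers are arbitrary):

```python
import torch

# Pretend channel values in [0, 1], as produced by transforms.ToTensor()
channel = torch.tensor([0.0, 0.25, 0.5, 1.0])
mean, std = 0.5, 0.5

normalized = (channel - mean) / std   # what transforms.Normalize does per channel
print(normalized)                     # roughly tensor([-1.0, -0.5, 0.0, 1.0])

# Undo the normalization before plotting, so imshow sees values back in [0, 1]
restored = normalized * std + mean
print(restored)                       # roughly tensor([0.00, 0.25, 0.50, 1.00])
```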
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(225), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=8) testloader = torch.utils.data.DataLoader(test_data, batch_size=8) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) print(images.shape, labels.shape) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output torch.Size([8, 3, 224, 224]) torch.Size([8]) ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim from torch.nn import functional as F class CatsAndDogs(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3*224*224, 112*112) self.fc2 = nn.Linear(112*112, 28*28) self.fc3 = nn.Linear(28*28, 7*7) self.fc4 = nn.Linear(7*7, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = CatsAndDogs() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) epoch = 1 steps = 0 traning_losses, test_losses = [], [] for e in range(epoch): print("Start Epoch {}/{}, ".format(e+1, epoch)) running_loss = 0 for images, labels in trainloader: print("Start trainloader {}, ".format(len(trainloader))) optimizer.zero_grad() logps = model(images) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: print("Start testloader {}, ".format(len(testloader))) log_ps = model(images) test_loss += criterion(log_ps, labels) top_p, top_class = torch.exp(log_ps).topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() traning_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("End Epoch {}/{}, ".format(e+1, epoch), "Training Loss: {:.3f}, ".format(traning_losses[-1]), "Test Loss: {:.3f}, ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output 
Start Epoch 1/1, Start trainloader 2813, ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms from torch.utils.data import DataLoader import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. 
You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '~/Datasets/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = DataLoader(dataset,batch_size=32,num_workers=4,shuffle=True,pin_memory=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
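If you'd rather normalize with statistics measured from your own images instead of 0.5 for every channel, one rough approach is to average per-channel means and standard deviations over the batches of an un-normalized loader. This is only a sketch: `loader` is a hypothetical name for a loader whose transforms are just resize/crop/`ToTensor()`, and averaging per-batch standard deviations is an approximation rather than an exact pooled statistic.

```python
import torch

# Rough per-channel statistics, assuming `loader` yields un-normalized [N, 3, H, W] batches
mean = torch.zeros(3)
std = torch.zeros(3)
n_batches = 0

for images, _ in loader:
    # put the channel dimension first, then flatten everything else
    flat = images.permute(1, 0, 2, 3).reshape(3, -1)
    mean += flat.mean(dim=1)
    std += flat.std(dim=1)
    n_batches += 1

mean /= n_batches
std /= n_batches

# these could then be passed to transforms.Normalize(mean.tolist(), std.tolist())
print(mean, std)
```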
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
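You can also apply a `Compose` pipeline to a single image directly, which is a handy way to see what a set of transforms produces before wiring it into `ImageFolder`; once `ToTensor()` is in the pipeline the result is a tensor you can inspect. A small sketch, where the file path is only a placeholder, so point it at any image you actually have:

```python
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.RandomRotation(30),
                                transforms.RandomResizedCrop(224),
                                transforms.RandomHorizontalFlip(),
                                transforms.ToTensor()])

# Placeholder path -- replace with a real image from the dataset
img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')
out = transform(img)

print(out.shape)               # torch.Size([3, 224, 224])
print(out.min(), out.max())    # values stay in [0, 1] since there's no Normalize step yet
```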
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.RandomRotation(45),transforms.RandomHorizontalFlip(),transforms.RandomVerticalFlip(),transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]) test_transforms = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../datasets/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '../datasets/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.Resize(72), transforms.CenterCrop(64), transforms.RandomHorizontalFlip(), transforms.Grayscale(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(72), transforms.CenterCrop(64), transforms.Grayscale(), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True) def imshow(image, ax=None, title=None, normalize=True): """Imshow for Tensor.""" if ax is None: fig, ax = plt.subplots() image = image.numpy().transpose((1, 2, 0)) if normalize: mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = std * image + mean image = np.clip(image, 0, 1) image = image.reshape((64, 64)) ax.imshow(image) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.tick_params(axis='both', length=0) ax.set_xticklabels('') ax.set_yticklabels('') return ax # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) # print(images[0, :].shape) # helper.imshow(images[0, :]) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] imshow(images[ii, :], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn from torch import optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(4096, 512) self.fc2 = nn.Linear(512, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) model.cuda() epochs = 3 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): print("Epoch {}".format(e+1)) running_loss_training = 0 running_loss_test = 0 for images, labels in trainloader: images = images.cuda() labels = labels.cuda() optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss_training += loss.item() else: with torch.no_grad(): model.eval() all_results = [] for images, labels in testloader: images = images.cuda() labels = labels.cuda() log_ps = model(images) loss = criterion(log_ps, labels) running_loss_test += loss.item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.reshape((*top_class.shape)) all_results.append(equals) all_results = torch.cat(all_results) accuracy = torch.mean(all_results.type(torch.FloatTensor)) print(f'Accuracy: {accuracy.item()*100}%') model.train() print("Training loss: {}".format(running_loss_training / len(trainloader))) print("Test loss: {}".format(running_loss_test / len(testloader))) train_losses.append(running_loss_training / len(trainloader)) test_losses.append(running_loss_test / len(testloader)) plt.plot(train_losses, label="Training loss") plt.plot(test_losses, label="Test loss") plt.legend(frameon=False) # Import helper module (should be in the repo) import helper # Test out your network! 
model.eval() dataiter = iter(testloader) images, labels = next(dataiter) images, labels = images.cuda(), labels.cuda() img = images[0] print(labels[0]) # Convert 2D image to 1D vector img = img.view(1, 4096) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model.forward(img) ps = torch.exp(output) img = img.cpu() ps = ps.cpu() # Plot the image and probabilities import numpy as np ps = ps.data.numpy().squeeze() fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2) ax1.imshow(img.resize_(1, 64, 64).numpy().squeeze()) ax1.axis('off') ax2.barh(np.arange(2), ps) ax2.set_aspect(0.1) ax2.set_yticks(np.arange(2)) ax2.set_yticklabels(np.arange(2)) ax2.set_title('Class Probability') ax2.set_xlim(0, 1.1) plt.tight_layout() model.train() ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) print(images.shape) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output torch.Size([32, 3, 224, 224]) ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
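To get a sense of why a fully-connected network struggles here, it helps to count the inputs: flattening a 224x224 RGB image produces far more values than the 28x28 images you've worked with so far. A quick back-of-the-envelope check (just arithmetic, nothing specific to this dataset):

```python
# Values per image once flattened for a fully-connected layer
rgb_inputs = 224 * 224 * 3   # 150,528 values for a 224x224 RGB image
gray_inputs = 28 * 28        # 784 values for a 28x28 grayscale image
print(rgb_inputs, gray_inputs, rgb_inputs // gray_inputs)   # 150528 784 192
```

That's roughly 192 times more inputs per image, which is a big part of why the optional attempt below is such an uphill battle and why a pre-trained convolutional network works much better.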
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 model.eval() # turn off dropout while evaluating with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels).item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) # Import helper module (should be in the repo) import helper # Test out your network! model.eval() dataiter = iter(testloader) images, labels = next(dataiter) img = images[0] # Flatten the 224x224x3 image to a 1D vector of 150528 values, matching fc1 img = img.view(1, 150528) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model.forward(img) ps = torch.exp(output) # Show the image and print the class probabilities (helper.view_classify is built for 28x28 images, so it isn't used here) helper.imshow(img.view(3, 224, 224), normalize=False) print(ps) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
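Once you've built the dataset (as in the solution cell that follows), it's worth checking which classes `ImageFolder` actually picked up. A small sketch, assuming `dataset` is the `ImageFolder` created below:

```python
# ImageFolder lists the class folders it found (sorted alphabetically)
# and maps each one to an integer label
print(dataset.classes)        # e.g. ['cat', 'dog']
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}
```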
###Code data_dir = 'Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. 
Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. 
It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. 
Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
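Datasets also support `len()` and indexing, which makes for a quick sanity check. A minimal sketch, assuming `dataset` is the `ImageFolder` created in the next cell:

```python
# Each element is an (image, label) pair; with ToTensor() in the pipeline
# the image comes back as a tensor, otherwise as a PIL image
print(len(dataset))        # number of images found under the class folders
image, label = dataset[0]
print(image.shape, label)  # e.g. torch.Size([3, 224, 224]) and a class index
```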
###Code data_dir = 'dogs-vs-cats\\train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'dogs-vs-cats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) #transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]))]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
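Each transform (and a whole `Compose` pipeline) is just a callable that acts on a single PIL image, so you can also preview what it does outside of a dataset. A minimal sketch, assuming PIL is installed and `'some_image.jpg'` stands in for a real file:

```python
from PIL import Image
from torchvision import transforms

preview = transforms.Compose([transforms.Resize(255),
                              transforms.CenterCrop(224),
                              transforms.ToTensor()])

img = Image.open('some_image.jpg')   # hypothetical path, swap in a real image
tensor = preview(img)
print(tensor.shape)                  # torch.Size([3, 224, 224]) for an RGB image
```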
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/gabriel/Documents/Pytorch/Cat_Dog_data/train' # TODO: compose transforms here transform = transforms.Compose([transforms.Resize(784), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
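To make the normalization arithmetic concrete, here's a tiny numeric check you can run on its own, using the 0.5 means and standard deviations from the example above:

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
# One pixel per channel with values 0.0, 0.5 and 1.0 (the range ToTensor produces)
x = torch.tensor([[[0.0]], [[0.5]], [[1.0]]])
print(normalize(x).flatten())   # tensor([-1.,  0.,  1.])
```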
###Code data_dir = '/home/gabriel/Documents/Pytorch/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(60), transforms.RandomResizedCrop(284), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
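A small sketch of what you get back (the path here is hypothetical): an `ImageFolder` records the class names it finds and returns `(image, label)` pairs when indexed.

```python
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)   # hypothetical path

print(dataset.classes)        # e.g. ['cat', 'dog'], taken from the sub-directory names
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}

image, label = dataset[0]     # indexing gives a (tensor, int) pair
print(image.shape, label)     # torch.Size([3, 224, 224]) 0
```

The directory layout `ImageFolder` expects is described next.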
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir,transform=transform) dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30),transforms.RandomResizedCrop(224),transforms.RandomHorizontalFlip(), transforms.ToTensor(),transforms.Normalize([0.5,0.5,0.5],[0.5,0.5,0.5])]) test_transforms = transforms.Compose([transforms.Resize(255),transforms.CenterCrop(224),transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) images, labels = next(iter(testloader)) fig,axes = plt.subplots(figsize=(10,4),ncols=4) for i in range(4): ax = axes[i] helper.imshow(images[i],ax=ax,normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '/home/akshat/Documents/Projects/DeepLearning/Datasets/dogs-vs-cats/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
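The 0.5 values above are just convenient placeholders. If you want channel statistics from your own data, a rough sketch (assuming `dataloader` yields un-normalized batches, e.g. only `Resize`, `CenterCrop` and `ToTensor` applied) is to average the per-channel mean and standard deviation over batches:

```python
import torch

mean = torch.zeros(3)
std = torch.zeros(3)
n_batches = 0

for images, _ in dataloader:                          # assumes an un-normalized loader
    flat = images.permute(1, 0, 2, 3).reshape(3, -1)  # [3, batch * height * width]
    mean += flat.mean(dim=1)
    std += flat.std(dim=1)
    n_batches += 1

print(mean / n_batches, std / n_batches)
```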
###Code data_dir = '/home/akshat/Documents/Projects/DeepLearning/Datasets/dogs-vs-cats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test1', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder('dogs-vs-cats/train', transform= transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'dogs-vs-cats' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(24), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([transforms.RandomResizedCrop(24), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test1', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn import torch.nn.functional as F class CatDog(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(1728, 224) self.fc2 = nn.Linear(224, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 1) self.dropout = nn.Dropout(p=0.4) def forward(self, x): x = x.flatten(start_dim=1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = torch.sigmoid(self.fc4(x)) return x device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) model = CatDog() model.to(device) criterion = nn.BCELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.03) epochs = 5 train_accuracy, test_accuracy = [], [] for e in range(epochs): train_acc, test_acc = 0, 0 for images, labels in trainloader: labels = labels.type(torch.FloatTensor) images, labels = images.to(device), labels.to(device) optimizer.zero_grad() preds = model(images) # print(preds.type(), labels.type()) loss = criterion(preds, labels.unsqueeze(1)) loss.backward() optimizer.step() pred_class = torch.FloatTensor([0 if x < 0.5 else 1 for x in preds]).to(device) acc = pred_class == labels.unsqueeze(1) # print(acc) train_acc += torch.mean(acc.type(torch.FloatTensor)) train_acc = train_acc.numpy()/len(trainloader) train_accuracy.append(train_acc) for images, labels in testloader: labels = labels.type(torch.FloatTensor) images, labels = images.to(device), labels.to(device) model.eval() with torch.no_grad(): preds = model(images) loss = criterion(preds, labels.unsqueeze(1)) model.train() pred_class = torch.FloatTensor([0 if x < 0.5 else 1 for x in preds]).to(device) acc = pred_class == labels.unsqueeze(1) test_acc += torch.mean(acc.type(torch.FloatTensor)) test_acc = test_acc.numpy()/len(testloader) test_accuracy.append(test_acc) print('Epoch: {}, Train Accuracy: {:.2%}, Test Accuracy: {:.2%}'.format(e, train_acc, test_acc)) # Import helper module (should be in the repo) import helper # Test out your network! model.to('cpu') model.eval() dataiter = iter(testloader) images, labels = dataiter.next() img = images[0] # Convert 2D image to 1Dpu vector img = img.flatten().unsqueeze(0) # Calculate the class probabilities (softmax) for img with torch.no_grad(): output = model.forward(img) # ps = torch.exp(output) # # Plot the image and probabilities # helper.view_classify(img.view(1, 28, 28), ps, version='Fashion') helper.imshow(images[0], normalize=False) output ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
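Once you've built the loader in the exercise below, it's worth sanity-checking one batch; a small sketch assuming a batch size of 32 and 224x224 crops:

```python
# Pull a single batch and confirm the shapes match what a network will expect
images, labels = next(iter(dataloader))

print(images.shape)   # torch.Size([32, 3, 224, 224]) -> batch, channels, height, width
print(labels.shape)   # torch.Size([32])
print(labels[:8])     # integer class indices, e.g. tensor([0, 1, 1, 0, 0, 1, 0, 1])
```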
###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
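One reason to leave normalization off while you experiment: a normalized tensor has values outside [0, 1], so plotting it directly gets clipped (that's where the "Clipping input data" warnings come from). A small sketch, assuming a 3-channel tensor normalized with means and standard deviations of 0.5, of undoing the normalization before display:

```python
import matplotlib.pyplot as plt

def denormalize(img, mean=0.5, std=0.5):
    """Undo (x - mean) / std so values are back in [0, 1] for plotting."""
    return img * std + mean

images, labels = next(iter(trainloader))              # assumes a loader with Normalize applied
plt.imshow(denormalize(images[0]).permute(1, 2, 0))   # CHW -> HWC for matplotlib
plt.show()
```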
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.Grayscale(num_output_channels=1), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.Grayscale(num_output_channels=1), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn, optim import torch.nn.functional as F class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(50176, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = torch.tanh(self.fc1(x)) x = torch.tanh(self.fc2(x)) x = torch.tanh(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x model = Network() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.1) epochs = 5 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. 
".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt plt.plot(train_losses, label='Training loss') plt.plot(test_losses, label='Validation loss') plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder(data_dir, transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
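A quick way to convince yourself why the random transforms belong only on the training side (the image path below is hypothetical): the augmenting pipeline produces a different tensor on every call, while resize-and-crop is deterministic.

```python
import torch
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([transforms.RandomRotation(30),
                              transforms.RandomResizedCrop(224),
                              transforms.RandomHorizontalFlip(),
                              transforms.ToTensor()])
deterministic = transforms.Compose([transforms.Resize(255),
                                    transforms.CenterCrop(224),
                                    transforms.ToTensor()])

img = Image.open('Cat_Dog_data/train/dog/dog.0.jpg')   # hypothetical file name

print(torch.equal(augment(img), augment(img)))               # almost always False
print(torch.equal(deterministic(img), deterministic(img)))   # True
```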
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. 
Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder("Cat_dog_data/train", transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32, shuffle = True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). 
So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle = True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn class Network(nn.Module): def __init__(self): super().__init__() self.fc0 = nn.Linear(150528,1024) self.fc1 = nn.Linear(1024,512) self.fc2 = nn.Linear(512,256) self.fc3 = nn.Linear(256,128) self.fc4 = nn.Linear(128,64) self.fc5 = nn.Linear(64,2) self.dropout = nn.Dropout(p=0.4) def forward(self, x): x = x.view(x.shape[0],-1) ##forgor again x = self.dropout(F.relu(self.fc0(x))) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = self.dropout(F.relu(self.fc4(x))) x = F.log_softmax(self.fc5(x), dim=1) return x torch.cuda.is_available() from torch.nn import functional as F device = torch.device("cuda") model = Network() model.to(device) import sys print('__Python VERSION:', sys.version) print('__pyTorch VERSION:', torch.__version__) print('__CUDA VERSION', ) from subprocess import call # call(["nvcc", "--version"]) does not work ! 
nvcc --version print('__CUDNN VERSION:', torch.backends.cudnn.version()) print('__Number CUDA Devices:', torch.cuda.device_count()) print('__Devices') # call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"]) print('Active CUDA Device: GPU', torch.cuda.current_device()) print ('Available devices ', torch.cuda.device_count()) print ('Current cuda device ', torch.cuda.current_device()) # from torch.nn import functional as F # device = torch.device("cuda") # model = Network() # model.to(device) criterion = nn.NLLLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.01) epochs = 30 train_losses, test_losses = [], [] for e in range(epochs): model.train() running_loss = 0 for data_train in trainloader: # images, labels = data_train[0], data_train[1] images, labels = data_train[0].to(device), data_train[1].to(device) optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: accuracy = 0 test_loss = 0 with torch.no_grad(): model.eval() for data_test in testloader: images, labels = data_test[0].to(device), data_test[1].to(device) log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) _, pred = ps.topk(k=1, dim=1) equals = pred == labels.view(*pred.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss.item()/len(testloader)) print("Epoch: {}/{}.. ".format(e+1,epochs), "Train loss: {:.4f}.. ".format(running_loss/len(trainloader)), "Test loss: {:.4f}.. ".format(test_loss.item()/len(testloader)), "Test Accuracy: {:.4f}.. ".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. 
So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder('/home/xiao/Downloads/Cat_Dog_data/train', transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import os import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
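If you want to sanity-check what `ImageFolder` inferred from the folder names before going further, the dataset object exposes the class list and the class-to-index mapping directly. A minimal sketch (it assumes the `Cat_Dog_data/train` folder sits next to the notebook; adjust the path to wherever you unzipped the data):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

print(dataset.classes)        # e.g. ['cat', 'dog'], taken from the folder names
print(dataset.class_to_idx)   # e.g. {'cat': 0, 'dog': 1}

dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
images, labels = next(iter(dataloader))
print(images.shape, labels.shape)   # torch.Size([32, 3, 224, 224]) torch.Size([32])
```

The label tensor returned by the loader holds these integer indices, which is what the loss function will expect later.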
###Code # Setting up the paths base_data_dir = os.path.abspath('../_data') cat_dog_data_dir = os.path.join(base_data_dir, 'Cat_Dog_data') data_train_dir = os.path.join(cat_dog_data_dir, 'train') data_test_dir = os.path.join(cat_dog_data_dir, 'test') bs = 32 # TODO: compose transforms here tfms = transforms.Compose([transforms.Resize(size=225), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: create the ImageFolder dataset = datasets.ImageFolder(root=data_train_dir, transform=tfms) # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=bs,shuffle=True) print(f'Number of samples: {len(dataset.samples)}') print(f'Number of classes: {len(dataset.classes)}') print(f'Number of batches: {len(dataloader)}') # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code # TODO: Define transforms for the training data and testing data train_tfms = transforms.Compose([transforms.RandomRotation(degrees=45), transforms.RandomResizedCrop(size=224), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ToTensor()]) test_tfms = transforms.Compose([transforms.Resize(size=225), transforms.CenterCrop(size=224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_train_dir, transform=train_tfms) test_data = datasets.ImageFolder(data_test_dir, transform=test_tfms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir ='Cat_Dog_data/1/' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_train = 'Cat_Dog_data/1/' data_test = 'Cat_Dog_data/2/' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_train, transform=train_transforms) test_data = datasets.ImageFolder(data_test, transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. 
This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
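One quick way to see what a composed pipeline actually produces is to run it on a single image; `ToTensor()` returns a `(channels, height, width)` float tensor with values scaled into [0, 1]. A small sketch, where the file path is only an illustration and should point at any image you have on disk:

```python
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

img = Image.open('Cat_Dog_data/train/cat/cat.0.jpg')   # hypothetical example file
x = transform(img)
print(x.shape)             # torch.Size([3, 224, 224])
print(x.min(), x.max())    # ToTensor() scales pixel values into [0, 1]
```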
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code # import torch.utils.data.DataLoader as DataLoader data_dir = 'Cat_Dog_data/train' # transform = # TODO: compose transforms here transform = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # dataset = # TODO: create the ImageFolder dataset = datasets.ImageFolder(data_dir,transform=transform) # dataloader = # TODO: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset,batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
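Before doing the exercise (which deliberately leaves normalization off), it can help to see the `Normalize` arithmetic on a toy tensor: with `mean=0.5` and `std=0.5`, a channel value of 0 maps to -1 and a value of 1 maps to +1. A tiny check:

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

# a toy 3x2x2 "image" whose values are already in [0, 1], as ToTensor() would produce
x = torch.tensor([[[0.0, 0.25], [0.5, 1.0]]]).repeat(3, 1, 1)
print(normalize(x)[0])
# tensor([[-1.0000, -0.5000],
#         [ 0.0000,  1.0000]])
```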
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize( [0.5, 0.5, 0.5], [0.5, 0.5, 0.5] ) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize( [0.5, 0.5, 0.5], [0.5, 0.5, 0.5] ) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] print(images[ii].shape) helper.imshow(images[ii], ax=ax, normalize=False) next(data_iter)[0].shape ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
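One way to get a feel for why a plain fully-connected network struggles here is to count parameters: each 224x224 RGB image flattens to 150,528 values, so even a single modest hidden layer is tens of millions of weights. A rough back-of-the-envelope check:

```python
in_features = 3 * 224 * 224              # 150528 values per flattened image
first_layer = in_features * 256 + 256    # weights + biases for a 256-unit hidden layer
print(in_features, first_layer)          # 150528  38535424  (~38.5 million parameters)
```

A convolutional network, or the pre-trained models in the next part, sidesteps this by sharing weights across the image.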
###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3*224*224, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) self.dropout = nn.Dropout(p=0.25) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = self.dropout(F.relu(self.fc3(x))) x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in iter(trainloader): model.train() optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ## TODO: Implement the validation pass and print out the validation accuracy # get class probabilities ps = torch.exp(log_ps) # top probabilities and top class indices # use only highest probability class top_p, top_class = ps.topk(1, dim=1) # find where predicted labels match ground truth equals = top_class == labels.view(*top_class.shape) # calculate accuracy by averaging equals: # equals must be cast to an integer (0|1 False|True) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print('Epoch: {}/{}..\n'.format(e+1,epochs), 'Training Loss: {:.3f}\n'.format(train_losses[-1]), 'Test Loss: {:.3f}\n'.format(test_losses[-1]), 'Test Accuracy: {:.2f}%'.format(accuracy/len(testloader)*100) ) ###Output Epoch: 1/30.. Training Loss: 3.042 Test Loss: 2473.539 Test Accuracy: 50.55% Epoch: 2/30.. Training Loss: 44.786 Test Loss: 1578.759 Test Accuracy: 50.55% ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). 
In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
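As a side note, if reading and decoding images from disk becomes the bottleneck, `DataLoader` can prepare batches with several worker processes; this isn't required for the exercise, but it's a one-argument change. A sketch, again assuming the `Cat_Dog_data/train` folder:

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

# num_workers uses background processes to read and transform images in parallel
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32,
                                         shuffle=True, num_workers=4)
```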
###Code data_dir = "assets/Cat_Dog_data/train" transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
###Code data_dir = 'assets/Cat_Dog_data' # Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomSizedCrop(224), transforms.RandomRotation(30), transforms.RandomVerticalFlip(), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.RandomSizedCrop(224), transforms.RandomRotation(30), transforms.RandomVerticalFlip(), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import os import sys import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. 
ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code
import os
import helper   # needed for helper.imshow below

# ToTensor() must come before Normalize(), since Normalize() operates on tensors
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.RandomRotation(45),
                                transforms.RandomHorizontalFlip(),
                                transforms.ColorJitter(),
                                transforms.ToTensor(),
                                transforms.Normalize([0.5, 0.5, 0.5],
                                                     [0.5, 0.5, 0.5])])

# ImageFolder has no train= argument; the train/test split comes from the folders themselves
trainset = datasets.ImageFolder(os.path.expanduser('~/Downloads/Cat_Dog_Data/train'), transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

testset = datasets.ImageFolder(os.path.expanduser('~/Downloads/Cat_Dog_Data/test'), transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)

# Run this to test your data loader
images, labels = next(iter(trainloader))
helper.imshow(images[0], normalize=False)
###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. 
For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = '~/Downloads/Cat_Dog_Data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomRotation(45), transforms.RandomHorizontalFlip(), transforms.ColorJitter(), transforms.ToTensor() ]) test_transforms = train_transforms # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
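If you do try flattening these images into a fully-connected network, keep in mind that the first `nn.Linear` layer has to accept the full flattened size of a 3x224x224 image, not the 784 used for the 28x28 MNIST images. A quick check of that number:

```python
flattened_size = 3 * 224 * 224
print(flattened_size)   # 150528 -- the in_features the first nn.Linear layer must accept
```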
###Code
 # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
from torch import nn, optim

# Each flattened image has 3 * 224 * 224 = 150528 features, and there are 2 classes (cat and dog)
model = nn.Sequential(nn.Linear(150528, 256),
                      nn.ReLU(),
                      nn.Linear(256, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 2),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.005)

epochs = 3
steps = 0

train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:

        # Flatten each batch of images into vectors
        images = images.view(images.shape[0], -1)

        optimizer.zero_grad()

        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    else:
        ## Validation pass and validation accuracy
        test_loss = 0
        accuracy = 0

        with torch.no_grad():
            for images, labels in testloader:
                # The test images have to be flattened as well
                images = images.view(images.shape[0], -1)

                log_ps = model(images)
                test_loss += criterion(log_ps, labels)

                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))

        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))

        print(f'Train loss: {running_loss/len(trainloader)}')
        print(f'Test loss: {test_loss/len(testloader)}')
        print(f'Accuracy: {accuracy/len(testloader)}')
###Output
 _____no_output_____
###Markdown
 Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code
 %matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
###Output
 _____no_output_____
###Markdown
 The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. 
You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' # DONE: compose transforms here transform = transforms.Compose([transforms.RandomResizedCrop(224,scale=(0.5,1.0),ratio=(1.,1.)), transforms.ToTensor()]) # DONE: create the ImageFolder dataset = datasets.ImageFolder(data_dir, transform=transform) # DONE: use the ImageFolder dataset to create the DataLoader dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=8) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. 
Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # DONE: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomHorizontalFlip(p=0.3), transforms.RandomVerticalFlip(p=0.3), transforms.RandomRotation(45), transforms.RandomResizedCrop(224,scale=(0.5,1.0),ratio=(1.,1.)), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.RandomResizedCrop(224,scale=(0.5,1.0),ratio=(1.,1.)), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=8) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True, num_workers=8) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
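Once you've built the `ImageFolder` for this exercise, it's worth checking the class-to-index mapping it infers from the sub-directory names, since that's exactly where the labels come from. A small sketch, assuming a `transform` pipeline like the one shown above:

```python
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder('Cat_Dog_data/train', transform=transform)

print(dataset.classes)        # ['cat', 'dog'] - one class per sub-directory
print(dataset.class_to_idx)   # {'cat': 0, 'dog': 1} - the integer label used for each class
print(len(dataset))           # total number of images found under the class folders
```

If the classes come out as something like `['test', 'train']`, the path points one level too high in the directory tree.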
###Code
 data_dir = 'Cat_Dog_data/train'

# TODO: compose transforms here
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

# TODO: create the ImageFolder - point it at the train folder so the classes are cat and dog
dataset = datasets.ImageFolder(data_dir, transform=transform)

# TODO: use the ImageFolder dataset to create the DataLoader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Run this to test your data loader
images, labels = next(iter(dataloader))
helper.imshow(images[0], normalize=False)
###Output
 _____no_output_____
###Markdown
 If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
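The 0.5 means and standard deviations above are just convenient placeholders. If you later want `transforms.Normalize` to reflect the data itself, you can estimate per-channel statistics from the training set. A rough sketch, assuming a `trainloader` that yields un-normalized tensors (per-image standard deviations are averaged here, which is only an approximation):

```python
mean = torch.zeros(3)
std = torch.zeros(3)
n_images = 0
for images, _ in trainloader:
    flat = images.view(images.shape[0], 3, -1)   # [batch, channel, pixels]
    mean += flat.mean(2).sum(0)                  # add up per-image channel means
    std += flat.std(2).sum(0)                    # add up per-image channel stds
    n_images += images.shape[0]

print(mean / n_images, std / n_images)           # candidate values for transforms.Normalize
```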
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms # upload external file before import from google.colab import files files.upload() import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code from google.colab import drive drive.mount('/content/drive/') !ls "/content/drive/My Drive/" data_dir = '/content/drive/My Drive/Colab Notebooks/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. 
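For the loaders themselves, a common pattern is to shuffle the training data every epoch while keeping the test data in a fixed order, and to use a few worker processes to speed up loading. A sketch with illustrative parameter values, assuming `ImageFolder` datasets `train_data` and `test_data` like the ones built in the next cell:

```python
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32,
                                          shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32,
                                         shuffle=False, num_workers=2)
```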
###Code data_dir = '/content/drive/My Drive/Colab Notebooks/Cat_Dog_data' # Define transforms from training and testing data train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) # Create Image Folder train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) # create the DataLoader trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '../../../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
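Because the training transforms are random, asking the dataset for the same item several times returns a different tensor each time, which is an easy way to eyeball what the augmentation is doing. A minimal sketch, assuming a `train_data` `ImageFolder` built with random transforms and the `helper.imshow` utility used elsewhere in this notebook:

```python
fig, axes = plt.subplots(figsize=(10, 4), ncols=4)
for ii in range(4):
    image, label = train_data[0]      # same file, but a new random transform on every access
    helper.imshow(image, ax=axes[ii], normalize=False)
```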
###Code data_dir = '../../../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset from torch import nn,optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528,256) self.fc2 = nn.Linear(256,128) self.fc3 = nn.Linear(128,2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): x = x.view(x.shape[0],-1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = F.log_softmax(self.fc3(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.03) epochs = 1 train_losses, test_losses = [],[] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): model.eval() for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1,dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) model.train() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1,epochs), "Training Loss: {:.3f}.. ".format(train_losses[-1]), "Test Loss: {:.3f}.. ".format(test_losses[-1]), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output Epoch: 1/1.. Training Loss: 6437.269.. Test Loss: 1379039.125.. Test Accuracy: 0.506 ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. 
Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' %config IPCompleter.greedy=True import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # TODO: compose transforms here dataset = datasets.ImageFolder(data_dir, transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
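Leaving normalization off matters for the visual check in the next cell: `transforms.Normalize` shifts the values out of the [0, 1] range produced by `ToTensor()`, so images displayed directly can look washed out or clipped. A small sketch of the effect, assuming a `train_data` `ImageFolder` whose pipeline ends with `ToTensor()`:

```python
image, label = train_data[0]                      # values roughly in [0, 1] from ToTensor()
print(image.min().item(), image.max().item())

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
normed = normalize(image)                         # (x - 0.5) / 0.5, roughly in [-1, 1]
print(normed.min().item(), normed.max().item())
```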
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(255), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0,0,0],[1,1,1])]) # TODO: compose transforms here dataset = datasets.ImageFolder('C:/Users/AYUSH/Desktop/AYUSH COURSE/PyTorch Udacity/Cat_Dog_data/train', transform=transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=32,shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transform test_transforms = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0,0,0],[1,1,1])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder('C:/Users/AYUSH/Desktop/AYUSH COURSE/PyTorch Udacity/Cat_Dog_data/train', transform=train_transforms) test_data = datasets.ImageFolder('C:/Users/AYUSH/Desktop/AYUSH COURSE/PyTorch Udacity/Cat_Dog_data/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. 
In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). 
To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = '~/data/Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False); ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
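To make the normalization formula above concrete, here is a tiny standalone check (a sketch with made-up values, separate from the exercise): with a mean of 0.5 and a std of 0.5 per channel, values in the [0, 1] range land in [-1, 1].

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

# A fake 3-channel 2x2 "image" with values at the extremes of the [0, 1] range
fake_img = torch.tensor([[[0.0, 1.0], [0.25, 0.75]]]).repeat(3, 1, 1)

out = normalize(fake_img)
print(out.min().item(), out.max().item())  # -1.0 1.0, i.e. (x - 0.5) / 0.5
```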
###Code data_dir = '~/data/Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.RandomRotation(30), transforms.Resize((28, 28)), transforms.RandomResizedCrop(28), transforms.RandomHorizontalFlip(), transforms.ToTensor(), ]) test_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.Resize((28, 28)), transforms.ToTensor(), ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader import numpy as np data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] print (ax) print (np.squeeze(images[ii]).shape) helper.imshow(np.squeeze(images[ii]), ax=ax, normalize=False) # change this to the trainloader or testloader import numpy as np data_iter = iter(testloader) images, labels = next(data_iter) # fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): # ax = axes[ii] # print (ax) # print (np.squeeze(images[ii]).shape) helper.imshow(np.squeeze(images[ii])) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
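Before looking at a concrete attempt, a quick back-of-envelope count (a sketch assuming 224x224 RGB inputs flattened to a single vector) shows why fully-connected layers get unwieldy at this resolution:

```python
# Parameters in just the first fully-connected layer on flattened 224x224 RGB input
n_inputs = 224 * 224 * 3                   # 150,528 features after flattening
n_hidden = 256                             # a modest first hidden layer
n_params = n_inputs * n_hidden + n_hidden  # weights + biases
print(f"{n_params:,}")                     # 38,535,424 -- roughly 38.5 million parameters
```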
###Code from torch import nn, optim import torch.nn.functional as F class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 2) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) epochs = 30 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in trainloader: optimizer.zero_grad() log_ps = model(images) loss = criterion(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 # Turn off gradients for validation, saves memory and computations with torch.no_grad(): for images, labels in testloader: log_ps = model(images) test_loss += criterion(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)), "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)), "Test Accuracy: {:.3f}".format(accuracy/len(testloader))) ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
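As a quick check of how that labeling works (a small sketch, assuming the folder layout above has been downloaded to `Cat_Dog_data/train`), `ImageFolder` exposes both the class names it found and the index it assigned to each:

```python
from torchvision import datasets

dataset = datasets.ImageFolder('Cat_Dog_data/train')

print(dataset.classes)       # ['cat', 'dog'] -- taken from the sub-directory names
print(dataset.class_to_idx)  # {'cat': 0, 'dog': 1} -- indices assigned in sorted order
```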
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(labels[0]) ###Output tensor(0) ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transforms)```where `'path/to/data'` is the file path to the data directory and `transforms` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = '../../Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) print(labels[0]) ###Output tensor(0) ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. ###Code data_dir = '../../Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(trainloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. 
To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). 
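A few of those other transforms can be dropped into the same kind of `Compose` pipeline (a non-exhaustive sketch; the specific choices and arguments here are illustrative, so check the documentation before relying on them):

```python
from torchvision import transforms

extra_transforms = transforms.Compose([transforms.ColorJitter(brightness=0.2, contrast=0.2),  # random brightness/contrast jitter
                                       transforms.RandomVerticalFlip(),                       # flip top-to-bottom half the time
                                       transforms.Resize(255),
                                       transforms.CenterCrop(224),
                                       transforms.ToTensor()])
```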
Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = # TODO: compose transforms here dataset = # TODO: create the ImageFolder dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
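One possible starting point for this exercise (a sketch: random augmentation for training, a deterministic resize and crop for testing, and normalization left off as instructed) looks like this:

```python
from torchvision import transforms

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])
```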
###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. 
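A single example can also be pulled straight from the dataset by index (a quick sketch, assuming a `dataset` built as above); each item comes back as an `(image, label)` pair, with the label being the class index:

```python
image, label = dataset[0]        # first sample; an (image, label) pair
print(label)                     # e.g. 0
print(dataset.classes[label])    # e.g. 'cat' under the default alphabetical mapping
```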
You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. 
You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. 
###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. 
###Code data_dir = '/Users/shivendra/Downloads/dogs-vs-cats/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. 
These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). 
The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code # !wget https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip # !unzip Cat_Dog_data.zip data_dir = './Cat_Dog_data' # defines a pipeline of image transformations transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir + '/train', transform) # TODO: create the ImageFolder dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) plt.imshow(images[0].permute(1,2,0)) plt.show() # helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
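If you later do want to pass real statistics to `transforms.Normalize`, they can be roughly estimated from a few un-normalized training batches (a sketch; `estimate_channel_stats` is a made-up helper name and assumes a loader yielding `(batch, 3, H, W)` tensors produced by `ToTensor()`):

```python
import torch

def estimate_channel_stats(loader, max_batches=10):
    """Rough per-channel mean/std from the first few batches of (B, 3, H, W) tensors."""
    mean = torch.zeros(3)
    sq_mean = torch.zeros(3)
    n_batches = 0
    for i, (images, _) in enumerate(loader):
        if i >= max_batches:
            break
        flat = images.transpose(0, 1).reshape(3, -1)   # (3, B*H*W)
        mean += flat.mean(dim=1)
        sq_mean += (flat ** 2).mean(dim=1)
        n_batches += 1
    mean /= n_batches
    std = (sq_mean / n_batches - mean ** 2).sqrt()
    return mean, std

# e.g. means, stds = estimate_channel_stats(trainloader)
```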
###Code data_dir = './Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), # transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) test_transforms = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor(), #transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) ]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=128) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) print(images.shape) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for i, ax in enumerate(axes.flatten()): ax.imshow(images[i].permute(1,2,0)) # helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. 
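If you do try a fully-connected network anyway (as in the next code cell), the first layer's `in_features` has to match the flattened image size. With the 224-pixel crop used above and three color channels, that works out to 3 × 224 × 224 = 150,528 values per image, which is where the `150528` in the next cell comes from. A quick check:

```python
# Each 224x224 RGB image flattens to channels * height * width values,
# which is the input size a first fully-connected (Linear) layer needs.
channels, height, width = 3, 224, 224
print(channels * height * width)  # 150528
```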
###Code from torch import nn import torch.nn.functional as F from torch import optim # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(150528, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 10) self.dropout = nn.Dropout(0.2) def forward(self, x): x = x.view(x.shape[0], -1) x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = F.log_softmax(self.fc3(x), dim=1) return x model = Network() model criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=3e-4) model.cuda() from google.colab import drive drive.mount('/content/drive') train_losses, test_losses = [], [] epochs = 10 for e in range(epochs): running_loss = 0 for images, labels in trainloader: logprobs = model(images) # zero the grad optimizer.zero_grad() loss = criterion(logprobs, labels) #perform a backprop step loss.backward() # perform a SGD step optimizer.step() running_loss += loss.item() with torch.no_grad(): model.eval() # set the model o test mode accuracy = 0 test_loss = 0 for test_images, test_labels in testloader: logprobs = model(test_images) test_loss += criterion(logprobs, test_labels) probs = torch.exp(logprobs) _, preds = torch.topk(probs, k=1, dim=-1) accuracy += torch.mean((preds == test_labels.view(*preds.shape)).type(torch.FloatTensor)) test_losses.append(test_loss/len(testloader)) train_losses.append(running_loss / len(trainloader)) print("Epoch {}/{}".format(e+1, epochs), "Train loss: {}".format(running_loss / len(trainloader)), "Test loss: {}".format(test_loss / len(testloader)), "Test accuracy: {}".format(accuracy / len(testloader)) ) model.train() # reset the model to train mode ###Output _____no_output_____ ###Markdown Loading Image DataSo far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torchvision import datasets, transforms import helper ###Output _____no_output_____ ###Markdown The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.htmlimagefolder)). In general you'll use `ImageFolder` like so:```pythondataset = datasets.ImageFolder('path/to/data', transform=transform)```where `'path/to/data'` is the file path to the data directory and `transform` is a list of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:```root/dog/xxx.pngroot/dog/xxy.pngroot/dog/xxz.pngroot/cat/123.pngroot/cat/nsdf3.pngroot/cat/asd932_.png```where each class has it's own directory (`cat` and `dog`) for the images. 
The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set. TransformsWhen you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:```pythontransform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()])```There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html). Data LoadersWith the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.```pythondataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)```Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`.```python Looping through it, get a batch on each loop for images, labels in dataloader: pass Get one batchimages, labels = next(iter(dataloader))``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ###Code data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) #dataloader = # TODO: use the ImageFolder dataset to create the DataLoader # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ###Output _____no_output_____ ###Markdown If you loaded the data correctly, you should see something like this (your image will be different): Data AugmentationA common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. 
This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc.To randomly rotate, scale and crop, then flip your images you would define your transforms like this:```pythontrain_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])```You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so```input[channel] = (input[channel] - mean[channel]) / std[channel]```Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network work weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn.You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered (except you'll need to normalize the same way). So, for validation/test images, you'll typically just resize and crop.>**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = test_transforms = # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) # change this to the trainloader or testloader data_iter = iter(testloader) images, labels = next(data_iter) fig, axes = plt.subplots(figsize=(10,4), ncols=4) for ii in range(4): ax = axes[ii] helper.imshow(images[ii], ax=ax, normalize=False) ###Output _____no_output_____ ###Markdown Your transformed images should look something like this.Training examples:Testing examples: At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and at a higher resolution (so far you've seen 28x28 images which are tiny).In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem. ###Code # Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset ###Output _____no_output_____
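###Markdown If you attempt the optional TODO on a GPU (as the earlier training loop does by calling `model.cuda()`), remember that the input batches have to be moved to the same device as the model. A minimal sketch of that pattern, assuming a `model`, `criterion`, `optimizer` and `trainloader` have already been defined as above:

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

for images, labels in trainloader:
    # Each batch must live on the same device as the model before the forward
    # pass; leaving it on the CPU while the model is on the GPU raises an error.
    images, labels = images.to(device), labels.to(device)

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```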
Linear_Regression/Linear_Regression.ipynb
###Markdown Simple Linear Regression Model In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. * Use graphlab SArray and SFrame functions to compute important summary statistics * Write a function to compute the Simple Linear Regression weights using the closed form solution * Write a function to make predictions of the output given the input feature * Turn the regression around to predict the input given the output * Compare two different models for predicting house prices To install turicreate or graphlab ###Code !pip install turicreate ###Output _____no_output_____ ###Markdown To import turicreate ###Code import turicreate as tc from turicreate import SFrame from google.colab import files ###Output _____no_output_____ ###Markdown Uploading files and unzipping ###Code uploaded = files.upload() !unzip home_data.sframe.zip ###Output Archive: home_data.sframe.zip replace home_data.sframe/m_1ce96d9d245ca490.0000? [y]es, [n]o, [A]ll, [N]one, [r]ename: n replace __MACOSX/home_data.sframe/._m_1ce96d9d245ca490.0000? [y]es, [n]o, [A]ll, [N]one, [r]ename: N ###Markdown Load house sales data ###Code sales = tc.SFrame('home_data.sframe') ###Output _____no_output_____ ###Markdown Split data into training and testing ###Code train_data,test_data = sales.random_split(.8,seed=0) sales # Let's compute the mean of the House Prices in King County in 2 different ways. prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray # recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses: sum_prices = prices.sum() num_houses = prices.nnz() avg_price_1 = sum_prices/num_houses avg_price_2 = prices.mean() print ("average price via method 1: " + str(avg_price_1)) print ("average price via method 2: " + str(avg_price_2)) half_prices = 0.5*prices # Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with * prices_squared = prices*prices sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up. print ("the sum of price squared is: " + str(sum_prices_squared)) print (sales['price'].mean()) ###Output 540088.1419053351 ###Markdown Build a generic simple linear regression function ###Code def simple_linear_regression(input_feature, output): Xi = input_feature Yi = output N = len(Xi) # compute the mean of input_feature and output Ymean = Yi.mean() Xmean = Xi.mean() # compute the product of the output and the input_feature and its mean SumYiXi = (Yi * Xi).sum() YiXiByN = (Yi.sum() * Xi.sum()) / N # compute the squared value of the input_feature and its mean XiSq = (Xi * Xi).sum() XiXiByN = (Xi.sum() * Xi.sum()) / N # use the formula for the slope slope = (SumYiXi - YiXiByN) / (XiSq - XiXiByN) # use the formula for the intercept intercept = Ymean - (slope * Xmean) return (intercept, slope) ###Output _____no_output_____ ###Markdown We can test that our function works by passing it something where we know the answer. 
In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1 ###Code test_feature = tc.SArray(range(5)) test_output = tc.SArray(1 + 1*test_feature) (test_intercept, test_slope) = simple_linear_regression(test_feature, test_output) print ("Intercept: " + str(test_intercept)) print ("Slope: " + str(test_slope)) ###Output Intercept: 1.0000000000000002 Slope: 1.0 ###Markdown Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data! ###Code sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price']) print ("Intercept: " + str(sqft_intercept)) print ("Slope: " + str(sqft_slope)) ###Output Intercept: -47116.076574940584 Slope: 281.9588385676974 ###Markdown Predicting Values ###Code def get_regression_predictions(input_feature, intercept, slope): predicted_values = intercept + (slope * input_feature) return predicted_values ###Output _____no_output_____ ###Markdown Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above. ###Code my_house_sqft = 2650 estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope) print ("The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)) ###Output The estimated price for a house with 2650 squarefeet is $700074.85 ###Markdown Residual Sum of Squares ###Code def get_residual_sum_of_squares(input_feature, output, intercept, slope): predicted_values = intercept + (slope * input_feature) residuals = output - predicted_values RSS = (residuals * residuals).sum() return(RSS) print (get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope)) # should be 0.0 rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope) print ('The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)) ###Output The RSS of predicting Prices based on Square Feet is : 1201918356321966.2 ###Markdown Predict the squarefeet given price What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x). ###Code def inverse_regression_predictions(output, intercept, slope): # solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions: estimated_feature = (output - intercept)/slope return estimated_feature my_house_price = 800000 estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope) print ("The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)) ###Output The estimated squarefeet for a house worth $800000.00 is 3004 ###Markdown New Model: estimate prices from bedrooms We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame. Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data! 
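Before fitting the bedrooms model, here is a quick round-trip check of the two prediction helpers defined above, using the square-footage parameters already estimated (a small aside, not part of the assignment):

```python
# Predicting a price from square feet and then inverting that prediction
# should recover (approximately) the original square footage.
price_2650 = get_regression_predictions(2650, sqft_intercept, sqft_slope)
recovered_sqft = inverse_regression_predictions(price_2650, sqft_intercept, sqft_slope)
print(price_2650, recovered_sqft)  # recovered_sqft should come back as ~2650
```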
###Code # Estimate the slope and intercept for predicting 'price' based on 'bedrooms' sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price']) print ("Intercept: " + str(sqft_intercept)) print ("Slope: " + str(sqft_slope)) ###Output Intercept: 109473.1804692861 Slope: 127588.95217458377 ###Markdown Test your Linear Regression Algorithm Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet. ###Code # Compute RSS when using bedrooms on TEST data: sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price']) rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], sqft_intercept, sqft_slope) print ('The RSS of predicting Prices based on Bedrooms is : ' + str(rss_prices_on_bedrooms)) # Compute RSS when using squarfeet on TEST data: sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price']) rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope) print ('The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)) print ("The lowest RSS on TEST data: " + str(min(rss_prices_on_bedrooms,rss_prices_on_sqft) )) ###Output The lowest RSS on TEST data: 275402936247141.53 ###Markdown Reading the Data ###Code urlTrain = "HousingData_LinearRegression.csv" df = pd.read_csv(urlTrain) df.head() ###Output _____no_output_____ ###Markdown Use standard deviation for normalization ###Code dfNorm=pd.DataFrame(((df-df.mean())/df.std())) #df.apply(lambda x: (x - DataFrame.mean(x)) / (DataFrame.std(x))) dfNorm.insert(0, 'Ones', 1) dfNorm.head() ###Output _____no_output_____ ###Markdown 1. After you've added the intercept term, define X as the features in the dataframe. Define Y as the target variable.2. Convert them to a numpy array and define beta(the coeffecients) with zeros. ###Code column = dfNorm.shape[1] X = dfNorm.iloc[:,0:column-1] Y = dfNorm.iloc[:,column-1:column] #Y = dfNorm['Price(USD)'] #X = dfNorm.drop('Price(USD)', axis = 1) print(X.head()) print(Y.head()) XMatrix = np.matrix(X.values) YMatrix = np.matrix(Y.values) m, n = np.shape(XMatrix) print(m , n) ###Output 47 3 ###Markdown Initilize the beta values to zeroes ###Code beta = np.zeros(n) temp = np.matrix(np.zeros(beta.shape)) print(temp) print(beta[0]) #parameters = int(beta.ravel().shape[1]) #print parameters ###Output [[0. 0. 0.]] 0.0 ###Markdown Since we now have every module to calculate our cost function, we'll go ahead and define it. ###Code def costFunction(X, Y, beta): ''' Compute the Least Square Cost Function. Return the calculated cost function. ''' m, n = np.shape(X) #print np.power(((X.dot(beta)) - Y), 2) #cost=np.sum(np.square(X.dot(beta)-Y) ) / (2 * m) cost=np.sum(np.square(np.dot(X, beta)-Y.T))/ (2 * m) #print cost return cost ###Output _____no_output_____ ###Markdown Define a Gradient Descent method that will update beta in every iteration and also update the cost. ###Code def gradientDescent(X, Y, beta, alpha, iters): ''' Compute the gradient descent function. Return beta and the cost array. 
''' cost = np.zeros(iters) m, n = np.shape(X) i=0 print("first " ,beta) for i in range(iters): Err=np.dot(X, beta)-Y.T #print beta j=0 for j in range(n): tempBeta=beta[j] tempBeta=tempBeta-((alpha / m) * np.sum(np.dot(Err, X[:,j]))) beta[j]=tempBeta #print 1-((alpha / m) * np.sum(np.multiply(Err, X[:,j]))) #print beta cost[i]=costFunction(X,Y,beta) #print beta #print beta[0], i return beta, cost ###Output _____no_output_____ ###Markdown Define alpha and number of iterations of your choice and use them to call to gradientDescent function. ###Code beta = np.zeros(n) #please try different values to see the results, but alpha=0.01 and iters=1000 are suggested. alpha = 0.01 iters = 1000 result = gradientDescent(XMatrix, YMatrix, beta, alpha, iters) ###Output first [0. 0. 0.] ###Markdown Implement the Ridge Regression regularization and report the change in coeffecients of the parameters. ###Code fig, ax = plt.subplots(figsize=(12,8)) ax.plot(np.arange(iters), result[1], 'r',label = 'Unregulated') ax.set_xlabel('Iterations') ax.set_ylabel('Cost') ax.set_title('Error vs. Training Epoch') print(beta) print(costFunction(XMatrix,YMatrix,beta)) plt.show() def costFunctionRidge(X, Y, beta,ridgeLambda): ''' Compute the Least Square Cost Function. Return the calculated cost function. ''' m, n = np.shape(X) #print (X.dot(beta)-Y) ** 2 #print np.square(X.dot(ridgeLambda)-Y).shape #print (beta*ridgeLambda**2).shape costRidge=(np.sum(np.square(X.dot(beta)-Y.T)) ) / (2 * m) #+np.sum(beta*ridgeLambda**2) return costRidge def gradientDescentRidge(X, Y, beta, alpha, itersreg, ridgeLambda): ''' Compute the gradient descent function. Return beta and the cost array. ''' costRidge = np.zeros(iters) m, n = np.shape(X) i=0 #print "first " ,beta for i in range(iters): Err=np.dot(X, beta)-Y.T #print Err.shape j=0 for j in range(n): tempBeta=beta[j] tempBeta=tempBeta-((alpha / m) * np.sum(np.dot(Err, X[:,j])))+ beta[j]*ridgeLambda/m beta[j]=tempBeta costRidge[i]=costFunction(X,Y,beta) return beta, costRidge beta = np.zeros(n) alphareg = 0.01 itersreg = 1000 ridgeLambda=0.05 regResult = gradientDescentRidge(XMatrix, YMatrix, beta, alphareg, itersreg,ridgeLambda) ###Output _____no_output_____ ###Markdown Define alpha, number of iterations and lambda of your choice that minimizes the cost function and use them to call to gradientDescent function. Plot the cost graph with iterations titled "Error vs training" with and without regularization(y axis labeled as cost and x axix labeled as iterations). Then, calculate the MSE. ###Code fig, ax = plt.subplots(figsize=(12,8)) ax.plot(np.arange(iters), result[1], 'b') ax.plot(np.arange(iters), regResult[1], 'r') ax.set_xlabel('Iterations') ax.set_ylabel('Cost') ax.set_title('Error vs. Training Epoch. Unregulated(Blue) vs Regulated(Red)') plt.show() print(regResult[0]) print("Final Beta :",beta) print("MSE : " , 2*costFunction(XMatrix,YMatrix,beta)) ###Output Final Beta : [-1.11017964e-16 1.04556024e+00 -1.51690372e-01] MSE : 0.2788132786032512 ###Markdown 线性回归原理及实践线性回归通过一个线性模型来适配观测数据,这个线性模型是在特征和响应之间构建一个关系。目的是预测当前被观察的对象的值。线性回归的实现过程主要包括建立线性模型和选择优化方法求解参数两部分。 1. 
建立线性模型想要一个成功的回归分析,在建立线性模型之前,确认以下信息很重要: **线性:**特征值与和预测值是线性相关 **不含多重共线性:**数据有极少或没有多重共线性,当特征不是相互独立时,会引发多重共线性。 **多元正态分布:**多元回归残差符合正态分布。 **虚拟变量:** 当遇到数据是非数值数据类型时,使用分类数据是一个非常有效的方法。分据数据,是指反映事物类别的数据,是离散数据,其数值个数有限且值之间无序。比如,按性别分为男,女两类。在一个回归模中,这些分类值可以用虚拟变量来表示,变量通常取如1或0这样的值,来表示肯定或否定类型。 **虚拟变量陷进:**虚拟变量陷进是指两个及以上变量之间高度相关的情形。简而言之,就是存在一个能够被其他变量预测出的变量,举一个存在重复类别的直观例子:对于男性类别,该类别也可以通过女性类别来定义,女性值为0时,表示男性,值为1时表示女性,反之亦然。解决虚拟变量陷进的方法是,类别变量数减去1,假如有m个类别,那么在模型构建时取(m-1)个虚拟变量,减去的那个变量可以看作是参考值。 给定训练集: $X_{train} = (x^{(1)},x^{(2)},x^{(3)},...,x^{(i)})$,对于单个输入$x^{(i)}=(x_{1}^{(i)},x_{2}^{(i)},...,x_{n}^{(i)})$, 可得到线性模型为:$$\hat{y}^{(i)} = w^T x^{(i)} + b = w_{1}x_{1}^{(i)}+w_{2}x_{2}^{(i)}+...+w_{n-1}x_{n-1}^{(i)}+w_{n}x_{n}^{(i)}+b\tag{1}$$对应的损失函数$ \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) $为:$$ \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) = \frac{1}{2} (\hat{y}^{(i)}-y^{(i)})^{2}\tag{2} $$然后通过对所有训练样例求和来计算代价函数:$$ J = \frac{1}{2m} \sum_{i=1}^m \mathcal{L}(\hat{y}^{(i)}, y^{(i)})\tag{3}$$ 2. 选择优化方法计算出代价函数后,需要选择优化方法来最小化代价函数,以得到合适的参数w和b。线性回归常用的优化方法为梯度下降法和最小二乘法。 2.1 梯度下降法 梯度下降法的过程为:首先执行前向传播和反向传播,然后根据反向传播得到的各个参数的偏导数,进行参数的更新。 **前向传播** 对于输入$X$,线性回归的预测值为:$$\hat{Y} = w^T X + b = (\hat{y}^{(1)}, \hat{y}^{(2)}, ..., \hat{y}^{(m-1)}, \hat{y}^{(m)})\tag{4}$$通过已知的训练数据与得到的预测值,可得到代价函数:$$ J = \frac{1}{2m} \sum_{i=1}^m (\hat{y}^{(i)}-y^{(i)})^{2}\tag{5}$$**反向传播**$$ dW = \frac{\partial J}{\partial W} = \frac{1}{m}X(\hat{Y}-Y)^T\tag{6}$$$$ db = \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (\hat{y}^{(i)}-y^{(i)})\tag{7}$$**更新参数**$$ w = w - \alpha*dW\tag{8}$$$$ b = b - \alpha*db\tag{9}$$其中,$\alpha$为学习速率。 2.2 最小二乘法最小二乘法是一种数学优化技术。它通过最小化误差的平方和寻找数据的最佳函数匹配。利用最小二乘法可以简便地求得未知的数据,并使得这些求得的数据与实际数据之间误差的平方和为最小。对于输入$X$,$\hat{Y} = w^T X + b$可转换为:$$ W = \begin{bmatrix}w \\b \end{bmatrix}, \ X = \begin{bmatrix}X \\1 \end{bmatrix} \tag{10}$$得到转换后的模型为:$$\hat{Y} = W^T X \tag{11}$$对应的损失函数:$$ J = \frac{1}{2m} \sum_{i=1}^m (\hat{y}^{(i)}-y^{(i)})^{2} = \frac{1}{2m} (\hat{Y}-Y)^{T}(\hat{Y}-Y) \tag{12}$$求出$dW$,并令$dW=0$,得到:$$ dW = \frac{\partial J}{\partial W} = \frac{1}{m}X(\hat{Y}-Y)^T = \frac{1}{m}(XX^{T}W - XY) = 0 \tag{13}$$求解得:$$ W = (XX^{T})^{-1}XY \tag{14}$$由公式(14)可知,线性回归可用最小二乘法求解参数的条件是$(XX^{T})$可逆,即矩阵$X$满秩。 学习目标- 构建学习算法的通用框架,主要包括: - 数据预处理 - 初始化参数 - 计算代价函数及其梯度 - 使用优化算法(最小二乘法,梯度下降法)- 构建简单线性回归模型分析数据- 构建多元线性回归模型分析数据 构建简单线性回归模型分析数据 导入库 ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.cross_validation import train_test_split %matplotlib inline ###Output E:\ruanjian\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. 
"This module will be removed in 0.20.", DeprecationWarning) ###Markdown 导入数据集**数据集介绍:** 该数据集共25个数据项,特征为Hours(时长),要预测的值为Scores(分数)。 **查看数据集前5行** ###Code dataset = pd.read_csv('datasets/studentscores.csv') dataset.head() def load_dataset(): data = np.loadtxt("datasets/studentscores.csv", dtype=np.str, delimiter=",") X_train = data[1:,:1].astype(np.float) y_train = data[1:,-1].astype(np.float) return X_train, y_train X_train, y_train= load_dataset() ###Output _____no_output_____ ###Markdown 拆分数据集为训练集和测试集 ###Code X_train, X_test, y_train, y_test = train_test_split( X_train, y_train, test_size = 1/4, random_state = 0) ###Output _____no_output_____ ###Markdown 数据集矢量化 ###Code X_train, y_train = X_train.T.reshape(1,-1), y_train.T.reshape(1,-1) X_test, y_test = X_test.T.reshape(1,-1), y_test.T.reshape(1,-1) ###Output _____no_output_____ ###Markdown 1. 梯度下降法 参数初始化 ###Code def initialize_with_zeros(dim): """ 此函数为w创建一个形状为(dim,1)的零向量,并将b初始化为0。 输入: dim -- w向量的大小 输出: w -- 初始化的向量 b -- 初始化的偏差 """ w = np.zeros((dim,1)) b = 0 assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b ###Output _____no_output_____ ###Markdown 计算代价函数及其梯度 ###Code def propagate(w, b, X, Y): """ 实现前向传播的代价函数及反向传播的梯度 输入: w -- 权重, 一个numpy数组,大小为(特征数, 1) b -- 偏差, 一个标量 X -- 训练数据,大小为 (特征数 , 样本数量) Y -- 真实"标签"向量,大小为(1, 样本数量) 输出: cost -- 线性回归的代价函数 dw -- 相对于w的损失梯度,因此与w的形状相同 db -- 相对于b的损失梯度,因此与b的形状相同 """ m = X.shape[1] # 前向传播 Y_hat = np.dot(w.T,X)+b cost = np.dot((Y_hat - Y),(Y_hat - Y).T)/(2*m) # 反向传播 dw = np.dot(X,(Y_hat-Y).T)/m db = np.sum(Y_hat-Y)/m assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost ###Output _____no_output_____ ###Markdown 梯度下降法优化参数 ###Code def optimize(w, b, X, Y, num_iterations, learning_rate): """ 此函数通过运行梯度下降算法来优化w和b 输入: w -- 权重, 一个numpy数组,大小为(特征数, 1) b -- 偏差, 一个标量 X -- 训练数据,大小为 (特征数 , 样本数量) Y -- 真实"标签"向量,大小为(1, 样本数量) num_iterations -- 优化循环的迭代次数 learning_rate -- 梯度下降更新规则的学习率 print_cost -- 是否每200步打印一次成本 输出: params -- 存储权重w和偏见b的字典 grads -- 存储权重梯度相对于代价函数偏导数的字典 costs -- 在优化期间计算的所有损失的列表,这将用于绘制学习曲线。 """ costs = [] for i in range(num_iterations): # 成本和梯度计算 grads, cost = propagate(w, b, X, Y) dw = grads["dw"] db = grads["db"] # 更新参数 w = w - learning_rate * dw b = b - learning_rate * db # 记录成本 if i % 200 == 0: costs.append(cost) # 每200次训练迭代打印成本 if i % 200 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs ###Output _____no_output_____ ###Markdown 2. 
最小二乘法 ###Code def least_squares(X, Y): ''' 最小二乘法求解参数w,b 输入: X -- 训练数据,大小为 (特征数 , 样本数量) Y -- 真实值向量,大小为(1, 样本数量) 输出: w -- 权重, 一个numpy数组,大小为(特征数, 1) b -- 偏差, 一个标量 ''' X = np.concatenate((X,np.ones((1,X.shape[1]))),axis=0) W = np.dot(np.linalg.inv(np.dot(X,X.T)),np.dot(X,Y.T)) w = W[:-1] b = W[-1] return w, b ###Output _____no_output_____ ###Markdown 定义预测函数 ###Code def predict(w, b, X): ''' 使用线性回归参数(w,b)预测结果 输入: w -- 权重, 一个numpy数组,大小为(特征数, 1) b -- 偏差, 一个标量 X -- 训练数据,大小为 (特征数 , 样本数量) 输出: Y_prediction -- 包含X中示例的所有预测(0/1)的numpy数组(向量) ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) Y_prediction = np.dot(w.T,X)+b assert(Y_prediction.shape == (1, m)) return Y_prediction ###Output _____no_output_____ ###Markdown 构建线性回归模型 ###Code def model(X_train, Y_train, X_test, Y_test, optimization = "gradient descent",num_iterations = 2000, learning_rate = 0.5): """ 通过调用前面实现的函数来构建线性回归模型 输入: X_train -- 由numpy数组表示的训练集,大小为 (特征数,训练样本数) Y_train -- 由numpy数组(向量)表示的训练标签,大小为 (1, 训练样本数) X_test -- 由numpy数组表示的测试集,大小为(特征数,测试样本数) Y_test -- 由numpy数组(向量)表示的测试标签,大小为 (1, 测试样本数) optimization -- 选择优化方法,设为"gradient descent"时为梯度下降法,设为"least squares"时为最小二乘法。 num_iterations -- 超参数,表示优化参数的迭代次数 learning_rate -- 超参数,在优化算法更新规则中使用的学习率 输出: d -- 包含模型信息的字典。 """ if optimization == "gradient descent": # 初始化参数 w, b = initialize_with_zeros(X_train.shape[0]) # 梯度下降 parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate) # 从字典“parameters”中检索参数w和b w = parameters["w"] b = parameters["b"] elif optimization == "least squares": w, b = least_squares(X_train, Y_train) else: print("TypeError: model() got an unexpected keyword argument 'optimize'") # 预测测试/训练集 Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) # 打印测试集的预测结果 print("Test data predict value : {}".format(Y_prediction_test)) print("The test data true value: {}".format(Y_test)) d = {"Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d ###Output _____no_output_____ ###Markdown 模型训练与测试 ###Code d_simple = model(X_train, y_train, X_test, y_test, optimization = "least squares") ###Output Test data predict value : [[16.84472176 33.74557494 75.50062397 26.7864001 60.58810646 39.71058194 20.8213931 ]] The test data true value: [[20. 27. 69. 30. 62. 35. 
24.]] ###Markdown 训练集结果可视化 ###Code plt.scatter(np.squeeze(X_train), np.squeeze(y_train), color = 'red') plt.plot(np.squeeze(X_train), np.squeeze(d_simple["Y_prediction_train"]), color ='blue') plt.show() ###Output _____no_output_____ ###Markdown 测试集结果可视化 ###Code plt.scatter(np.squeeze(X_test), np.squeeze(y_test), color = 'red') plt.plot(np.squeeze(X_test), np.squeeze(d_simple["Y_prediction_test"]), color ='blue') plt.show() ###Output _____no_output_____ ###Markdown 构建多元线性回归模型分析数据 导入数据集**数据集介绍:** 该数据集共50个数据项,特征分别为:R&D Spend(研发花费),Administration(管理经费),Marketing Spend(市场花费),state(州)。要预测的内容为Profit(盈利)。**查看数据集前5行** ###Code dataset = pd.read_csv('datasets/50_Startups.csv') dataset.head() def load_dataset(): data = np.loadtxt("datasets/50_Startups.csv", dtype=np.str, delimiter=",") X_train = data[1:,:3].astype(np.float) X_dummy = data[1:,3] y_train = data[1:,-1].astype(np.float) return X_train, X_dummy, y_train train_X, X_dummy, train_y = load_dataset() ###Output _____no_output_____ ###Markdown 使用分类数据方法处理虚拟变量 ###Code def dummy_variable(X): ''' 输入: X -- 虚拟变量 输出: set_dummy -- 使用分类数据方法处理虚拟变量后的数组 ''' num_dummy = len(set(X)) set_dummy = np.zeros((X.shape[0],num_dummy)) for i in range(num_dummy): set_dummy[:,i][np.where(X==list(set(X))[i])] = 1. return set_dummy set_dummy = dummy_variable(X_dummy) train_set_x = np.concatenate((train_X,set_dummy),axis = 1) ###Output _____no_output_____ ###Markdown 躲避虚拟变量陷阱 ###Code train_set_x = train_set_x[:,:-1] ###Output _____no_output_____ ###Markdown 数据归一化处理 ###Code def normalization(X): ''' 输入: X -- 训练数据,大小为(特征数, 样本数量) 输出: X -- 归一化后的训练数据,大小为(特征数, 样本数量) x_max -- 原训练数据中每类特征的最大值 x_min -- 原训练数据中每类特征的最小值 ''' x_max = np.max(X,axis=0,keepdims=True) x_min = np.min(X,axis=0,keepdims=True) X = (X - x_min)/(x_max - x_min) return X,x_max,x_min train_set_x,x_max,x_min = normalization(train_set_x) ###Output _____no_output_____ ###Markdown 拆分数据集为训练集和测试集 ###Code train_set_x, test_set_x, train_set_y, test_set_y = train_test_split(train_set_x, train_y, test_size = 0.2, random_state = 0) ###Output _____no_output_____ ###Markdown 将数据集转换为矢量 ###Code train_set_x, train_set_y = train_set_x.T, train_set_y.T.reshape(1,-1) test_set_x, test_set_y = test_set_x.T, test_set_y.T.reshape(1,-1) ###Output _____no_output_____ ###Markdown 模型训练与测试 使用最小二乘法 ###Code d_multiple1 = model(train_set_x, train_set_y, test_set_x, test_set_y, optimization = "least squares") ###Output Test data predict value : [[103015.20159796 132582.27760816 132447.73845174 71976.09851258 178537.48221055 116161.24230165 67851.69209676 98791.73374687 113969.43533012 167921.0656955 ]] The test data true value: [[103282.38 144259.4 146121.95 77798.83 191050.39 105008.31 81229.06 97483.56 110352.25 166187.94]] ###Markdown 使用梯度下降法 ###Code d_multiple2 = model(train_set_x, train_set_y, test_set_x, test_set_y, optimization = "gradient descent", num_iterations = 3000, learning_rate = 0.5) ###Output Cost after iteration 0: 6807997862.101883 Cost after iteration 200: 43371803.699051 Cost after iteration 400: 40947393.054603 Cost after iteration 600: 40795651.773923 Cost after iteration 800: 40786137.392499 Cost after iteration 1000: 40785540.810488 Cost after iteration 1200: 40785503.402879 Cost after iteration 1400: 40785501.057301 Cost after iteration 1600: 40785500.910226 Cost after iteration 1800: 40785500.901004 Cost after iteration 2000: 40785500.900426 Cost after iteration 2200: 40785500.900389 Cost after iteration 2400: 40785500.900387 Cost after iteration 2600: 40785500.900387 Cost after iteration 2800: 40785500.900387 Test data 
predict value : [[103015.20160276 132582.27759601 132447.7384391 71976.09850775 178537.48219813 116161.24231955 67851.69209773 98791.73374919 113969.43533513 167921.06567985]] The test data true value: [[103282.38 144259.4 146121.95 77798.83 191050.39 105008.31 81229.06 97483.56 110352.25 166187.94]]
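###Markdown Since both optimization methods have now been run on the same test split, their predictions can be compared directly. A small follow-up sketch using the two result dictionaries above (this comparison is an addition, not part of the original notebook):

```python
# Mean squared error on the test set for least squares vs. gradient descent;
# the two should be nearly identical once gradient descent has converged.
mse_least_squares = np.mean((d_multiple1["Y_prediction_test"] - test_set_y) ** 2)
mse_gradient_descent = np.mean((d_multiple2["Y_prediction_test"] - test_set_y) ** 2)
print(mse_least_squares, mse_gradient_descent)
```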
notebooks/Slice_rendering.ipynb
###Markdown Clara Viz Interactive Slice renderingThis notebook shows how to load a volume dataset using the DataDefinition class append method. The append method uses ITK to load the dataset from disk.The rendering settings are loaded from a JSON file.Then the Clara Viz widget is used to display an interactive view of the data. Define the dataFirst the data to be rendered needs to be defined. Clara Viz provides a support class called `DataDefinition` which supports loading medical data formats and serves as a container for the data including orientation and for the settings like lights and transfer functions. ###Code # The DataDefinition class is using ITK to load the data files, make sure ITK is available !python3 -c "import itk" || python3 -m pip install itk from clara.viz.core import DataDefinition data_definition = DataDefinition() data_definition.append('data/syn3193805/img0066.nii.gz', 'DXYZ') data_definition.append('data/syn3193805/label0066.nii.gz', 'MXYZ') data_definition.load_settings('data/syn3193805/settings.json') ###Output _____no_output_____ ###Markdown Create a widget and select the data definition, then display the widget* press and hold left mouse button and move mouse to change slice* press and hold middle mouse button and move mouse to move around* mouse wheel to zoom in and out ###Code from clara.viz.widgets import Widget # switch to slice view, default is cinematic rendering data_definition.settings['Views'][0]['cameraName'] = 'Top' data_definition.settings['Views'][0]['mode'] = 'SLICE_SEGMENTATION' display(Widget(data_definition=data_definition)) ###Output _____no_output_____ ###Markdown Clara Viz Interactive Slice renderingThis notebook shows how to load a volume dataset using the DataDefinition class append method. The append method uses ITK to load the dataset from disk.The rendering settings are loaded from a JSON file.Then the Clara Viz widget is used to display an interactive view of the data. Define the dataFirst the data to be rendered needs to be defined. Clara Viz provides a support class called `DataDefinition` which supports loading medical data formats and serves as a container for the data including orientation and for the settings like lights and transfer functions. 
###Code # The DataDefinition class is using ITK to load the data files, make sure ITK is available !python3 -c "import itk" || python3 -m pip install itk from clara.viz.core import DataDefinition data_definition = DataDefinition() data_definition.append('data/syn3193805/img0066.nii.gz', 'DXYZ') data_definition.append('data/syn3193805/label0066.nii.gz', 'MXYZ') data_definition.load_settings('data/syn3193805/settings.json') ###Output _____no_output_____ ###Markdown Create a widget and select the data definition, then display the widget* press and hold left mouse button and move mouse to change slice* press and hold middle mouse button and move mouse to move around* mouse wheel to zoom in and out ###Code from clara.viz.widgets import Widget from ipywidgets import interactive, Dropdown, Box, VBox # switch to slice view, default is cinematic rendering data_definition.settings['Views'][0]['cameraName'] = 'Top' data_definition.settings['Views'][0]['mode'] = 'SLICE_SEGMENTATION' # create the widget widget = Widget(data_definition=data_definition) # dropdown callback function def set_camera(camera_name): widget.settings['Views'][0]['cameraName'] = camera_name widget.set_settings() # create a dropdown to select the view and display it alognside to the widget camera_dropdown = interactive(set_camera, camera_name=Dropdown(options=['Top', 'Front', 'Right'], value=widget.settings['Views'][0]['cameraName'], description='View')) display(Box([widget, camera_dropdown])) ###Output _____no_output_____
Lab 1 - Problem 1.ipynb
###Markdown Lab 1: Markov Decision Processes - Problem 1 Lab InstructionsAll your answers should be written in this notebook. You shouldn't need to write or modify any other files.**You should execute every block of code to not miss any dependency.***This project was developed by Peter Chen, Rocky Duan, Pieter Abbeel for the Berkeley Deep RL Bootcamp, August 2017. Bootcamp website with slides and lecture videos: https://sites.google.com/view/deep-rl-bootcamp/. It is adapted from Berkeley Deep RL Class [HW2](https://github.com/berkeleydeeprlcourse/homework/blob/c1027d83cd542e67ebed982d44666e0d22a00141/hw2/HW2.ipynb) [(license)](https://github.com/berkeleydeeprlcourse/homework/blob/master/LICENSE)*-------------------------- IntroductionThis assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from `gym` and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions. ###Code from misc import FrozenLakeEnv, make_grader env = FrozenLakeEnv() print(env.__doc__) ###Output Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend. The surface is described using a grid like the following SFFF FHFH FFFH HFFG S : starting point, safe F : frozen surface, safe H : hole, fall to your doom G : goal, where the frisbee is located The episode ends when you reach the goal or fall in a hole. You receive a reward of 1 if you reach the goal, and zero otherwise. ###Markdown Let's look at what a random episode looks like. ###Code # Some basic imports and setup import numpy as np, numpy.random as nr, gym import matplotlib.pyplot as plt %matplotlib inline np.set_printoptions(precision=3) # Seed RNGs so you get the same printouts as me env.seed(0); from gym.spaces import prng; prng.seed(10) # Generate the episode env.reset() for t in range(100): env.render() a = env.action_space.sample() ob, rew, done, _ = env.step(a) if done: break assert done env.render(); ###Output _____no_output_____ ###Markdown In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.We extract the relevant information from the gym Env into the MDP class below.The `env` object won't be used any further, we'll just use the `mdp` object. 
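One last look at `env` before it is set aside (a small aside; this assumes the slightly modified environment keeps gym's standard space attributes): its observation and action spaces should line up with the `nS` and `nA` fields of the `MDP` object constructed below.

```python
# The 4x4 grid gives 16 discrete states and 4 discrete actions.
print(env.nS, env.nA)         # 16 4
print(env.observation_space)  # expected: Discrete(16)
print(env.action_space)       # expected: Discrete(4)
```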
###Code class MDP(object): def __init__(self, P, nS, nA, desc=None): self.P = P # state transition and reward probabilities, explained below self.nS = nS # number of states self.nA = nA # number of actions self.desc = desc # 2D array specifying what each grid cell means (used for plotting) mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc) print("mdp.P is a two-level dict where the first key is the state and the second key is the action.") print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in") print(np.arange(16).reshape(4,4)) print("Action indices [0, 1, 2, 3] correspond to West, South, East and North.") print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n") print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n") print("As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0.") for i in range(4): print("P[5][%i] =" % i, mdp.P[5][i]) ###Output mdp.P is a two-level dict where the first key is the state and the second key is the action. The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] Action indices [0, 1, 2, 3] correspond to West, South, East and North. mdp.P[state][action] is a list of tuples (probability, nextstate, reward). For example, state 0 is the initial state, and the transition information for s=0, a=0 is P[0][0] = [(0.1, 0, 0.0), (0.8, 0, 0.0), (0.1, 4, 0.0)] As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0. P[5][0] = [(1.0, 5, 0)] P[5][1] = [(1.0, 5, 0)] P[5][2] = [(1.0, 5, 0)] P[5][3] = [(1.0, 5, 0)] ###Markdown Part 1: Value Iteration Problem 1: implement value iterationIn this problem, you'll implement value iteration, which has the following pseudocode:---Initialize $V^{(0)}(s)=0$, for all $s$For $i=0, 1, 2, \dots$- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$---We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the " chg actions" printout below--it won't affect the values computed.Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place. Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than our reference solution (which in turn will mean that our testing code won’t be able to help in verifying your code). 
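Before writing the full loop, it may help to see the bracketed sum evaluated for a single state-action pair, using the transition list printed above (a small illustration, not part of the graded solution):

```python
# Q(s=0, a=0) under the all-zero initial value function V^{(0)}:
# sum over transitions of P(s,a,s') * (R(s,a,s') + gamma * V(s')).
gamma = 0.95
V0 = np.zeros(mdp.nS)
transitions = mdp.P[0][0]   # [(0.1, 0, 0.0), (0.8, 0, 0.0), (0.1, 4, 0.0)]
q = sum(p * (r + gamma * V0[s_next]) for p, s_next, r in transitions)
print(q)   # 0.0 here, since V^{(0)} is all zeros and these rewards are all 0
```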
###Code def value_iteration(mdp, gamma, nIt, grade_print=print): """ Inputs: mdp: MDP gamma: discount factor nIt: number of iterations, corresponding to n above Outputs: (value_functions, policies) len(value_functions) == nIt+1 and len(policies) == nIt """ grade_print("Iteration | max|V-Vprev| | # chg actions | V[0]") grade_print("----------+--------------+---------------+---------") Vs = [np.zeros(mdp.nS)] # list of value functions contains the initial value function V^{(0)}, which is zero pis = [] for it in range(nIt): oldpi = pis[-1] if len(pis) > 0 else None # \pi^{(it)} = Greedy[V^{(it-1)}]. Just used for printout Vprev = Vs[-1] # V^{(it)} # Your code should fill in meaningful values for the following two variables # pi: greedy policy for Vprev (not V), # corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}] # ** it needs to be numpy array of ints ** # V: bellman backup on Vprev # corresponding to the math above: V^{(it+1)} = T[V^{(it)}] # ** numpy array of floats ** V = np.zeros(mdp.nS) pi = np.zeros(mdp.nS) for state in range(mdp.nS): possible_actions = np.zeros(mdp.nA) for action in mdp.P[state]: for prob, next_state, reward in mdp.P[state][action]: possible_actions[action] += prob * (reward + gamma * Vprev[next_state]) V[state] = np.max(possible_actions) for state in range(mdp.nS): possible_actions = np.zeros(mdp.nA) for action in mdp.P[state]: for prob, next_state, reward in mdp.P[state][action]: possible_actions[action] += prob * (reward + gamma * Vprev[next_state]) pi[state] = np.argmax(possible_actions) max_diff = np.abs(V - Vprev).max() nChgActions="N/A" if oldpi is None else (pi != oldpi).sum() grade_print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0])) Vs.append(V) pis.append(pi) return Vs, pis GAMMA = 0.95 # we'll be using this same value in subsequent problems # The following is the output of a correct implementation; when # this code block is run, your implementation's print output will be # compared with expected output. # (incorrect line in red background with correct line printed side by side to help you debug) expected_output = """Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531""" Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20, grade_print=make_grader(expected_output)) ###Output Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531 Test succeeded ###Markdown Below, we've illustrated the progress of value iteration. 
Your optimal actions are shown by arrows.At the bottom, the value of the different states are plotted. ###Code for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]): plt.figure(figsize=(3,3)) plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1)) ax = plt.gca() ax.set_xticks(np.arange(4)-.5) ax.set_yticks(np.arange(4)-.5) ax.set_xticklabels([]) ax.set_yticklabels([]) Y, X = np.mgrid[0:4, 0:4] a2uv = {0: (-1, 0), 1:(0, -1), 2:(1,0), 3:(-1, 0)} Pi = pi.reshape(4,4) for y in range(4): for x in range(4): a = Pi[y, x] u, v = a2uv[a] plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1) plt.text(x, y, str(env.desc[y,x].item().decode()), color='g', size=12, verticalalignment='center', horizontalalignment='center', fontweight='bold') plt.grid(color='b', lw=2, ls='-') plt.figure() plt.plot(Vs_VI) plt.title("Values of different states"); ###Output _____no_output_____ ###Markdown Lab 1: Markov Decision Processes - Problem 1 Lab InstructionsAll your answers should be written in this notebook. You shouldn't need to write or modify any other files.**You should execute every block of code to not miss any dependency.***This project was developed by Peter Chen, Rocky Duan, Pieter Abbeel for the Berkeley Deep RL Bootcamp, August 2017. Bootcamp website with slides and lecture videos: https://sites.google.com/view/deep-rl-bootcamp/. It is adapted from Berkeley Deep RL Class [HW2](https://github.com/berkeleydeeprlcourse/homework/blob/c1027d83cd542e67ebed982d44666e0d22a00141/hw2/HW2.ipynb) [(license)](https://github.com/berkeleydeeprlcourse/homework/blob/master/LICENSE)*-------------------------- IntroductionThis assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from `gym` and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions. ###Code from misc import FrozenLakeEnv, make_grader env = FrozenLakeEnv() print(env.__doc__) ###Output Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend. The surface is described using a grid like the following SFFF FHFH FFFH HFFG S : starting point, safe F : frozen surface, safe H : hole, fall to your doom G : goal, where the frisbee is located The episode ends when you reach the goal or fall in a hole. You receive a reward of 1 if you reach the goal, and zero otherwise. ###Markdown Let's look at what a random episode looks like. 
###Code # Some basic imports and setup import numpy as np, numpy.random as nr, gym import matplotlib.pyplot as plt %matplotlib inline np.set_printoptions(precision=3) # Seed RNGs so you get the same printouts as me env.seed(0); from gym.spaces import prng; prng.seed(10) # Generate the episode env.reset() for t in range(100): env.render() a = env.action_space.sample() ob, rew, done, _ = env.step(a) if done: break assert done env.render(); ###Output SFFF FHFH FFFH HFFG (Down) SFFF FHFH FFFH HFFG (Down) SFFF FHFH FFFH HFFG ###Markdown In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.We extract the relevant information from the gym Env into the MDP class below.The `env` object won't be used any further, we'll just use the `mdp` object. ###Code class MDP(object): def __init__(self, P, nS, nA, desc=None): self.P = P # state transition and reward probabilities, explained below self.nS = nS # number of states self.nA = nA # number of actions self.desc = desc # 2D array specifying what each grid cell means (used for plotting) mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc) print("mdp.P is a two-level dict where the first key is the state and the second key is the action.") print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in") print(np.arange(16).reshape(4,4)) print("Action indices [0, 1, 2, 3] correspond to West, South, East and North.") print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n") print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n") print("As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0.") for i in range(4): print("P[5][%i] =" % i, mdp.P[5][i]) ###Output mdp.P is a two-level dict where the first key is the state and the second key is the action. The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] Action indices [0, 1, 2, 3] correspond to West, South, East and North. mdp.P[state][action] is a list of tuples (probability, nextstate, reward). For example, state 0 is the initial state, and the transition information for s=0, a=0 is P[0][0] = [(0.1, 0, 0.0), (0.8, 0, 0.0), (0.1, 4, 0.0)] As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0. 
P[5][0] = [(1.0, 5, 0)] P[5][1] = [(1.0, 5, 0)] P[5][2] = [(1.0, 5, 0)] P[5][3] = [(1.0, 5, 0)] ###Markdown Part 1: Value Iteration Problem 1: implement value iterationIn this problem, you'll implement value iteration, which has the following pseudocode:---Initialize $V^{(0)}(s)=0$, for all $s$For $i=0, 1, 2, \dots$- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$---We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the " chg actions" printout below--it won't affect the values computed.Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place. Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than our reference solution (which in turn will mean that our testing code won’t be able to help in verifying your code). ###Code def value_iteration(mdp, gamma, nIt, grade_print=print): """ Inputs: mdp: MDP gamma: discount factor nIt: number of iterations, corresponding to n above Outputs: (value_functions, policies) len(value_functions) == nIt+1 and len(policies) == nIt """ grade_print("Iteration | max|V-Vprev| | # chg actions | V[0]") grade_print("----------+--------------+---------------+---------") Vs = [np.zeros(mdp.nS)] # list of value functions contains the initial value function V^{(0)}, which is zero pis = [] for it in range(nIt): oldpi = pis[-1] if len(pis) > 0 else None # \pi^{(it)} = Greedy[V^{(it-1)}]. Just used for printout Vprev = Vs[-1] # V^{(it)} # Your code should fill in meaningful values for the following two variables # pi: greedy policy for Vprev (not V), # corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}] # ** it needs to be numpy array of ints ** # V: bellman backup on Vprev # corresponding to the math above: V^{(it+1)} = T[V^{(it)}] # ** numpy array of floats ** V = np.copy(Vprev) pi = np.zeros(mdp.nS) # iterate over all states for state in range(mdp.nS): best_val_state, best_act_state = V[state], pi[state] # iterate over all actions for act in range(mdp.nA): val_act = 0 for prob, next_state, reward in mdp.P[state][act]: val_act += prob * (reward + gamma * Vprev[next_state]) # set best value / action if val_act > best_val_state: best_val_state, best_act_state = val_act, act V[state], pi[state] = best_val_state, best_act_state max_diff = np.abs(V - Vprev).max() nChgActions="N/A" if oldpi is None else (pi != oldpi).sum() grade_print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0])) Vs.append(V) pis.append(pi) return Vs, pis GAMMA = 0.95 # we'll be using this same value in subsequent problems # The following is the output of a correct implementation; when # this code block is run, your implementation's print output will be # compared with expected output. 
# (incorrect line in red background with correct line printed side by side to help you debug) expected_output = """Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531""" Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20, grade_print=make_grader(expected_output)) ###Output Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531 Test succeeded ###Markdown Below, we've illustrated the progress of value iteration. Your optimal actions are shown by arrows.At the bottom, the value of the different states are plotted. ###Code for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]): plt.figure(figsize=(3,3)) plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1)) ax = plt.gca() ax.set_xticks(np.arange(4)-.5) ax.set_yticks(np.arange(4)-.5) ax.set_xticklabels([]) ax.set_yticklabels([]) Y, X = np.mgrid[0:4, 0:4] a2uv = {0: (-1, 0), 1:(0, -1), 2:(1,0), 3:(-1, 0)} Pi = pi.reshape(4,4) for y in range(4): for x in range(4): a = Pi[y, x] u, v = a2uv[a] plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1) plt.text(x, y, str(env.desc[y,x].item().decode()), color='g', size=12, verticalalignment='center', horizontalalignment='center', fontweight='bold') plt.grid(color='b', lw=2, ls='-') plt.figure() plt.plot(Vs_VI) plt.title("Values of different states"); ###Output _____no_output_____ ###Markdown Lab 1: Markov Decision Processes - Problem 1 Lab InstructionsAll your answers should be written in this notebook. You shouldn't need to write or modify any other files.**You should execute every block of code to not miss any dependency.***This project was developed by Peter Chen, Rocky Duan, Pieter Abbeel for the Berkeley Deep RL Bootcamp, August 2017. Bootcamp website with slides and lecture videos: https://sites.google.com/view/deep-rl-bootcamp/. 
It is adapted from Berkeley Deep RL Class [HW2](https://github.com/berkeleydeeprlcourse/homework/blob/c1027d83cd542e67ebed982d44666e0d22a00141/hw2/HW2.ipynb) [(license)](https://github.com/berkeleydeeprlcourse/homework/blob/master/LICENSE)*-------------------------- IntroductionThis assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from `gym` and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions. ###Code from misc import FrozenLakeEnv, make_grader env = FrozenLakeEnv() print(env.__doc__) ###Output Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend. The surface is described using a grid like the following SFFF FHFH FFFH HFFG S : starting point, safe F : frozen surface, safe H : hole, fall to your doom G : goal, where the frisbee is located The episode ends when you reach the goal or fall in a hole. You receive a reward of 1 if you reach the goal, and zero otherwise. ###Markdown Let's look at what a random episode looks like. ###Code # Some basic imports and setup import numpy as np, numpy.random as nr, gym import matplotlib.pyplot as plt %matplotlib inline np.set_printoptions(precision=3) # Seed RNGs so you get the same printouts as me env.seed(0); from gym.spaces import prng; prng.seed(10) # Generate the episode env.reset() for t in range(100): env.render() a = env.action_space.sample() ob, rew, done, _ = env.step(a) if done: break assert done env.render(); ###Output SFFF FHFH FFFH HFFG (Down) SFFF FHFH FFFH HFFG (Down) SFFF FHFH FFFH HFFG ###Markdown In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.We extract the relevant information from the gym Env into the MDP class below.The `env` object won't be used any further, we'll just use the `mdp` object. 
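A minimal sketch of such a rollout (sampling a random action at every step) is shown below; it assumes the `ob, rew, done, _ = env.step(a)` signature used elsewhere in this lab, and is only illustrative.
###Code
# Illustrative sketch of a random rollout; mirrors the episode cell used
# in the other copies of this lab (assumes `env` created above).
env.reset()
for t in range(100):
    env.render()
    a = env.action_space.sample()    # uniformly random action
    ob, rew, done, _ = env.step(a)   # old gym API: (observation, reward, done, info)
    if done:
        break
env.render();
###Output
_____no_output_____
###Markdown
We extract the relevant information from the gym Env into the MDP class below. The `env` object won't be used any further, we'll just use the `mdp` object.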
###Code class MDP(object): def __init__(self, P, nS, nA, desc=None): self.P = P # state transition and reward probabilities, explained below self.nS = nS # number of states self.nA = nA # number of actions self.desc = desc # 2D array specifying what each grid cell means (used for plotting) mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc) print("mdp.P is a two-level dict where the first key is the state and the second key is the action.") print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in") print(np.arange(16).reshape(4,4)) print("Action indices [0, 1, 2, 3] correspond to West, South, East and North.") print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n") print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n") print("As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0.") for i in range(4): print("P[5][%i] =" % i, mdp.P[5][i]) ###Output mdp.P is a two-level dict where the first key is the state and the second key is the action. The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] Action indices [0, 1, 2, 3] correspond to West, South, East and North. mdp.P[state][action] is a list of tuples (probability, nextstate, reward). For example, state 0 is the initial state, and the transition information for s=0, a=0 is P[0][0] = [(0.1, 0, 0.0), (0.8, 0, 0.0), (0.1, 4, 0.0)] As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0. P[5][0] = [(1.0, 5, 0)] P[5][1] = [(1.0, 5, 0)] P[5][2] = [(1.0, 5, 0)] P[5][3] = [(1.0, 5, 0)] ###Markdown Part 1: Value Iteration Problem 1: implement value iterationIn this problem, you'll implement value iteration, which has the following pseudocode:---Initialize $V^{(0)}(s)=0$, for all $s$For $i=0, 1, 2, \dots$- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$---We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the " chg actions" printout below--it won't affect the values computed.Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place. Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than our reference solution (which in turn will mean that our testing code won’t be able to help in verifying your code). 
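To make the update concrete, a single synchronous backup of the rule above could be written as in the sketch below. It is illustrative only: it assumes `Vprev` is the previous value function as a NumPy array and `gamma` is the discount factor, and the names `V_new`, `pi_new` and `q_values` are not part of the graded interface.
###Code
# Illustrative sketch of one synchronous Bellman backup V_new = T[Vprev]
# together with the greedy policy w.r.t. Vprev (assumes mdp, Vprev, gamma, np).
V_new = np.zeros(mdp.nS)
pi_new = np.zeros(mdp.nS, dtype=int)
for s in range(mdp.nS):
    q_values = np.zeros(mdp.nA)
    for a in range(mdp.nA):
        for prob, next_s, reward in mdp.P[s][a]:
            q_values[a] += prob * (reward + gamma * Vprev[next_s])
    V_new[s] = q_values.max()
    pi_new[s] = q_values.argmax()  # np.argmax breaks ties toward the lower-index action
###Output
_____no_output_____
###Markdown
The graded function below wraps a backup of this kind in a loop over `nIt` iterations, appending each value function and greedy policy as it goes.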
###Code def value_iteration(mdp, gamma, nIt, grade_print=print): """ Inputs: mdp: MDP gamma: discount factor nIt: number of iterations, corresponding to n above Outputs: (value_functions, policies) len(value_functions) == nIt+1 and len(policies) == nIt """ grade_print("Iteration | max|V-Vprev| | # chg actions | V[0]") grade_print("----------+--------------+---------------+---------") Vs = [np.zeros(mdp.nS)] # list of value functions contains the initial value function V^{(0)}, which is zero pis = [] for it in range(nIt): oldpi = pis[-1] if len(pis) > 0 else None # \pi^{(it)} = Greedy[V^{(it-1)}]. Just used for printout Vprev = Vs[-1] # V^{(it)} # Your code should fill in meaningful values for the following two variables # pi: greedy policy for Vprev (not V), # corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}] # ** it needs to be numpy array of ints ** # V: bellman backup on Vprev # corresponding to the math above: V^{(it+1)} = T[V^{(it)}] # ** numpy array of floats ** V = Vprev # REPLACE THIS LINE WITH YOUR CODE pi = oldpi # REPLACE THIS LINE WITH YOUR CODE max_diff = np.abs(V - Vprev).max() nChgActions="N/A" if oldpi is None else (pi != oldpi).sum() grade_print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0])) Vs.append(V) pis.append(pi) return Vs, pis GAMMA = 0.95 # we'll be using this same value in subsequent problems # The following is the output of a correct implementation; when # this code block is run, your implementation's print output will be # compared with expected output. # (incorrect line in red background with correct line printed side by side to help you debug) expected_output = """Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531""" Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20, grade_print=make_grader(expected_output)) ###Output Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+---------  0 | 0.00000 | N/A | 0.000 *** Expected:  0 | 0.80000 | N/A | 0.000  1 | 0.00000 | N/A | 0.000 *** Expected:  1 | 0.60800 | 2 | 0.000  2 | 0.00000 | N/A | 0.000 *** Expected:  2 | 0.51984 | 2 | 0.000  3 | 0.00000 | N/A | 0.000 *** Expected:  3 | 0.39508 | 2 | 0.000  4 | 0.00000 | N/A | 0.000 *** Expected:  4 | 0.30026 | 2 | 0.000  5 | 0.00000 | N/A | 0.000 *** Expected:  5 | 0.25355 | 1 | 0.254  6 | 0.00000 | N/A | 0.000 *** Expected:  6 | 0.10478 | 0 | 0.345  7 | 0.00000 | N/A | 0.000 *** Expected:  7 | 0.09657 | 0 | 0.442  8 | 0.00000 | N/A | 0.000 *** Expected:  8 | 0.03656 | 0 | 0.478  9 | 0.00000 | N/A | 0.000 *** Expected:  9 | 0.02772 | 0 | 0.506  10 | 0.00000 | N/A | 0.000 *** Expected:  10 | 0.01111 | 0 | 0.517  11 | 0.00000 | N/A | 0.000 *** Expected:  11 | 0.00735 | 0 | 0.524  12 | 0.00000 | N/A | 0.000 *** Expected:  12 | 0.00310 | 0 | 0.527  13 | 0.00000 | N/A | 0.000 *** Expected:  13 | 0.00190 | 0 | 0.529  14 | 0.00000 | N/A | 0.000 *** Expected:  14 | 0.00083 | 0 | 0.530  15 | 0.00000 | N/A | 0.000 *** Expected:  15 | 0.00049 | 0 | 0.531  16 | 0.00000 | N/A | 0.000 
*** Expected:  16 | 0.00022 | 0 | 0.531  17 | 0.00000 | N/A | 0.000 *** Expected:  17 | 0.00013 | 0 | 0.531  18 | 0.00000 | N/A | 0.000 *** Expected:  18 | 0.00006 | 0 | 0.531  19 | 0.00000 | N/A | 0.000 *** Expected:  19 | 0.00003 | 0 | 0.531 Test failed ###Markdown Below, we've illustrated the progress of value iteration. Your optimal actions are shown by arrows.At the bottom, the value of the different states are plotted. ###Code for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]): plt.figure(figsize=(3,3)) plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1)) ax = plt.gca() ax.set_xticks(np.arange(4)-.5) ax.set_yticks(np.arange(4)-.5) ax.set_xticklabels([]) ax.set_yticklabels([]) Y, X = np.mgrid[0:4, 0:4] a2uv = {0: (-1, 0), 1:(0, -1), 2:(1,0), 3:(-1, 0)} Pi = pi.reshape(4,4) for y in range(4): for x in range(4): a = Pi[y, x] u, v = a2uv[a] plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1) plt.text(x, y, str(env.desc[y,x].item().decode()), color='g', size=12, verticalalignment='center', horizontalalignment='center', fontweight='bold') plt.grid(color='b', lw=2, ls='-') plt.figure() plt.plot(Vs_VI) plt.title("Values of different states"); ###Output _____no_output_____ ###Markdown Lab 1: Markov Decision Processes - Problem 1 Lab InstructionsAll your answers should be written in this notebook. You shouldn't need to write or modify any other files.**You should execute every block of code to not miss any dependency.***This project was developed by Peter Chen, Rocky Duan, Pieter Abbeel for the Berkeley Deep RL Bootcamp, August 2017. Bootcamp website with slides and lecture videos: https://sites.google.com/view/deep-rl-bootcamp/. It is adapted from Berkeley Deep RL Class [HW2](https://github.com/berkeleydeeprlcourse/homework/blob/c1027d83cd542e67ebed982d44666e0d22a00141/hw2/HW2.ipynb) [(license)](https://github.com/berkeleydeeprlcourse/homework/blob/master/LICENSE)*-------------------------- IntroductionThis assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from `gym` and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions. ###Code from misc import FrozenLakeEnv, make_grader env = FrozenLakeEnv() print(env.__doc__) ###Output Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend. The surface is described using a grid like the following SFFF FHFH FFFH HFFG S : starting point, safe F : frozen surface, safe H : hole, fall to your doom G : goal, where the frisbee is located The episode ends when you reach the goal or fall in a hole. You receive a reward of 1 if you reach the goal, and zero otherwise. ###Markdown Let's look at what a random episode looks like. 
###Code # Some basic imports and setup import numpy as np, numpy.random as nr, gym import matplotlib.pyplot as plt %matplotlib inline np.set_printoptions(precision=3) # Seed RNGs so you get the same printouts as me env.seed(0); from gym.spaces import prng; prng.seed(10) print(env.P.items()) # Generate the episode env.reset() for t in range(100): env.render() a = env.action_space.sample() ob, rew, done, _ = env.step(a) print(ob, rew, done) if done: break assert done env.render(); ###Output dict_items([(0, {0: [(0.1, 0, 0.0, False), (0.8, 0, 0.0, False), (0.1, 4, 0.0, False)], 1: [(0.1, 0, 0.0, False), (0.8, 4, 0.0, False), (0.1, 1, 0.0, False)], 2: [(0.1, 4, 0.0, False), (0.8, 1, 0.0, False), (0.1, 0, 0.0, False)], 3: [(0.1, 1, 0.0, False), (0.8, 0, 0.0, False), (0.1, 0, 0.0, False)]}), (1, {0: [(0.1, 1, 0.0, False), (0.8, 0, 0.0, False), (0.1, 5, 0.0, True)], 1: [(0.1, 0, 0.0, False), (0.8, 5, 0.0, True), (0.1, 2, 0.0, False)], 2: [(0.1, 5, 0.0, True), (0.8, 2, 0.0, False), (0.1, 1, 0.0, False)], 3: [(0.1, 2, 0.0, False), (0.8, 1, 0.0, False), (0.1, 0, 0.0, False)]}), (2, {0: [(0.1, 2, 0.0, False), (0.8, 1, 0.0, False), (0.1, 6, 0.0, False)], 1: [(0.1, 1, 0.0, False), (0.8, 6, 0.0, False), (0.1, 3, 0.0, False)], 2: [(0.1, 6, 0.0, False), (0.8, 3, 0.0, False), (0.1, 2, 0.0, False)], 3: [(0.1, 3, 0.0, False), (0.8, 2, 0.0, False), (0.1, 1, 0.0, False)]}), (3, {0: [(0.1, 3, 0.0, False), (0.8, 2, 0.0, False), (0.1, 7, 0.0, True)], 1: [(0.1, 2, 0.0, False), (0.8, 7, 0.0, True), (0.1, 3, 0.0, False)], 2: [(0.1, 7, 0.0, True), (0.8, 3, 0.0, False), (0.1, 3, 0.0, False)], 3: [(0.1, 3, 0.0, False), (0.8, 3, 0.0, False), (0.1, 2, 0.0, False)]}), (4, {0: [(0.1, 0, 0.0, False), (0.8, 4, 0.0, False), (0.1, 8, 0.0, False)], 1: [(0.1, 4, 0.0, False), (0.8, 8, 0.0, False), (0.1, 5, 0.0, True)], 2: [(0.1, 8, 0.0, False), (0.8, 5, 0.0, True), (0.1, 0, 0.0, False)], 3: [(0.1, 5, 0.0, True), (0.8, 0, 0.0, False), (0.1, 4, 0.0, False)]}), (5, {0: [(1.0, 5, 0, True)], 1: [(1.0, 5, 0, True)], 2: [(1.0, 5, 0, True)], 3: [(1.0, 5, 0, True)]}), (6, {0: [(0.1, 2, 0.0, False), (0.8, 5, 0.0, True), (0.1, 10, 0.0, False)], 1: [(0.1, 5, 0.0, True), (0.8, 10, 0.0, False), (0.1, 7, 0.0, True)], 2: [(0.1, 10, 0.0, False), (0.8, 7, 0.0, True), (0.1, 2, 0.0, False)], 3: [(0.1, 7, 0.0, True), (0.8, 2, 0.0, False), (0.1, 5, 0.0, True)]}), (7, {0: [(1.0, 7, 0, True)], 1: [(1.0, 7, 0, True)], 2: [(1.0, 7, 0, True)], 3: [(1.0, 7, 0, True)]}), (8, {0: [(0.1, 4, 0.0, False), (0.8, 8, 0.0, False), (0.1, 12, 0.0, True)], 1: [(0.1, 8, 0.0, False), (0.8, 12, 0.0, True), (0.1, 9, 0.0, False)], 2: [(0.1, 12, 0.0, True), (0.8, 9, 0.0, False), (0.1, 4, 0.0, False)], 3: [(0.1, 9, 0.0, False), (0.8, 4, 0.0, False), (0.1, 8, 0.0, False)]}), (9, {0: [(0.1, 5, 0.0, True), (0.8, 8, 0.0, False), (0.1, 13, 0.0, False)], 1: [(0.1, 8, 0.0, False), (0.8, 13, 0.0, False), (0.1, 10, 0.0, False)], 2: [(0.1, 13, 0.0, False), (0.8, 10, 0.0, False), (0.1, 5, 0.0, True)], 3: [(0.1, 10, 0.0, False), (0.8, 5, 0.0, True), (0.1, 8, 0.0, False)]}), (10, {0: [(0.1, 6, 0.0, False), (0.8, 9, 0.0, False), (0.1, 14, 0.0, False)], 1: [(0.1, 9, 0.0, False), (0.8, 14, 0.0, False), (0.1, 11, 0.0, True)], 2: [(0.1, 14, 0.0, False), (0.8, 11, 0.0, True), (0.1, 6, 0.0, False)], 3: [(0.1, 11, 0.0, True), (0.8, 6, 0.0, False), (0.1, 9, 0.0, False)]}), (11, {0: [(1.0, 11, 0, True)], 1: [(1.0, 11, 0, True)], 2: [(1.0, 11, 0, True)], 3: [(1.0, 11, 0, True)]}), (12, {0: [(1.0, 12, 0, True)], 1: [(1.0, 12, 0, True)], 2: [(1.0, 12, 0, True)], 3: [(1.0, 12, 0, True)]}), (13, {0: 
[(0.1, 9, 0.0, False), (0.8, 12, 0.0, True), (0.1, 13, 0.0, False)], 1: [(0.1, 12, 0.0, True), (0.8, 13, 0.0, False), (0.1, 14, 0.0, False)], 2: [(0.1, 13, 0.0, False), (0.8, 14, 0.0, False), (0.1, 9, 0.0, False)], 3: [(0.1, 14, 0.0, False), (0.8, 9, 0.0, False), (0.1, 12, 0.0, True)]}), (14, {0: [(0.1, 10, 0.0, False), (0.8, 13, 0.0, False), (0.1, 14, 0.0, False)], 1: [(0.1, 13, 0.0, False), (0.8, 14, 0.0, False), (0.1, 15, 1.0, True)], 2: [(0.1, 14, 0.0, False), (0.8, 15, 1.0, True), (0.1, 10, 0.0, False)], 3: [(0.1, 15, 1.0, True), (0.8, 10, 0.0, False), (0.1, 13, 0.0, False)]}), (15, {0: [(1.0, 15, 0, True)], 1: [(1.0, 15, 0, True)], 2: [(1.0, 15, 0, True)], 3: [(1.0, 15, 0, True)]})]) SFFF FHFH FFFH HFFG 1 0.0 False (Down) SFFF FHFH FFFH HFFG 5 0.0 True (Down) SFFF FHFH FFFH HFFG ###Markdown In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.We extract the relevant information from the gym Env into the MDP class below.The `env` object won't be used any further, we'll just use the `mdp` object. ###Code class MDP(object): def __init__(self, P, nS, nA, desc=None): self.P = P # state transition and reward probabilities, explained below self.nS = nS # number of states self.nA = nA # number of actions self.desc = desc # 2D array specifying what each grid cell means (used for plotting) mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc) print("mdp.P is a two-level dict where the first key is the state and the second key is the action.") print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in") print(np.arange(16).reshape(4,4)) print("Action indices [0, 1, 2, 3] correspond to West, South, East and North.") print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n") print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n") print("As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0.") for i in range(4): print("P[5][%i] =" % i, mdp.P[5][i]) ###Output mdp.P is a two-level dict where the first key is the state and the second key is the action. The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] Action indices [0, 1, 2, 3] correspond to West, South, East and North. mdp.P[state][action] is a list of tuples (probability, nextstate, reward). For example, state 0 is the initial state, and the transition information for s=0, a=0 is P[0][0] = [(0.1, 0, 0.0), (0.8, 0, 0.0), (0.1, 4, 0.0)] As another example, state 5 corresponds to a hole in the ice, in which all actions lead to the same state with probability 1 and reward 0. 
P[5][0] = [(1.0, 5, 0)] P[5][1] = [(1.0, 5, 0)] P[5][2] = [(1.0, 5, 0)] P[5][3] = [(1.0, 5, 0)] ###Markdown Part 1: Value Iteration Problem 1: implement value iterationIn this problem, you'll implement value iteration, which has the following pseudocode:---Initialize $V^{(0)}(s)=0$, for all $s$For $i=0, 1, 2, \dots$- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$---We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the " chg actions" printout below--it won't affect the values computed.Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place. Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than our reference solution (which in turn will mean that our testing code won’t be able to help in verifying your code). ###Code def value_iteration(mdp, gamma, nIt, grade_print=print): """ Inputs: mdp: MDP gamma: discount factor nIt: number of iterations, corresponding to n above Outputs: (value_functions, policies) len(value_functions) == nIt+1 and len(policies) == nIt """ grade_print("Iteration | max|V-Vprev| | # chg actions | V[0]") grade_print("----------+--------------+---------------+---------") Vs = [np.zeros(mdp.nS)] # list of value functions contains the initial value function V^{(0)}, which is zero pis = [] for it in range(nIt): oldpi = pis[-1] if len(pis) > 0 else None # \pi^{(it)} = Greedy[V^{(it-1)}]. Just used for printout Vprev = Vs[-1] # V^{(it)} # Your code should fill in meaningful values for the following two variables # pi: greedy policy for Vprev (not V), # corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}] # ** it needs to be numpy array of ints ** # V: bellman backup on Vprev # corresponding to the math above: V^{(it+1)} = T[V^{(it)}] # ** numpy array of floats ** V = np.zeros(mdp.nS) pi = np.zeros(mdp.nS) for (state, actions) in mdp.P.items(): maximum_state_value = 0 policy = 0 for (action, transitions) in actions.items(): state_value = 0 for (probability, next_state, reward) in transitions: state_value += probability * (reward + gamma * Vprev[next_state]) if state_value > maximum_state_value: maximum_state_value = state_value policy = action V[state] = maximum_state_value pi[state] = policy max_diff = np.abs(V - Vprev).max() nChgActions="N/A" if oldpi is None else (pi != oldpi).sum() grade_print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0])) Vs.append(V) pis.append(pi) return Vs, pis GAMMA = 0.95 # we'll be using this same value in subsequent problems # The following is the output of a correct implementation; when # this code block is run, your implementation's print output will be # compared with expected output. 
# (incorrect line in red background with correct line printed side by side to help you debug) expected_output = """Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531""" Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20, grade_print=make_grader(expected_output)) ###Output Iteration | max|V-Vprev| | # chg actions | V[0] ----------+--------------+---------------+--------- 0 | 0.80000 | N/A | 0.000 1 | 0.60800 | 2 | 0.000 2 | 0.51984 | 2 | 0.000 3 | 0.39508 | 2 | 0.000 4 | 0.30026 | 2 | 0.000 5 | 0.25355 | 1 | 0.254 6 | 0.10478 | 0 | 0.345 7 | 0.09657 | 0 | 0.442 8 | 0.03656 | 0 | 0.478 9 | 0.02772 | 0 | 0.506 10 | 0.01111 | 0 | 0.517 11 | 0.00735 | 0 | 0.524 12 | 0.00310 | 0 | 0.527 13 | 0.00190 | 0 | 0.529 14 | 0.00083 | 0 | 0.530 15 | 0.00049 | 0 | 0.531 16 | 0.00022 | 0 | 0.531 17 | 0.00013 | 0 | 0.531 18 | 0.00006 | 0 | 0.531 19 | 0.00003 | 0 | 0.531 Test succeeded ###Markdown Below, we've illustrated the progress of value iteration. Your optimal actions are shown by arrows.At the bottom, the value of the different states are plotted. ###Code for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]): plt.figure(figsize=(3,3)) plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1)) ax = plt.gca() ax.set_xticks(np.arange(4)-.5) ax.set_yticks(np.arange(4)-.5) ax.set_xticklabels([]) ax.set_yticklabels([]) Y, X = np.mgrid[0:4, 0:4] a2uv = {0: (-1, 0), 1:(0, -1), 2:(1,0), 3:(-1, 0)} Pi = pi.reshape(4,4) for y in range(4): for x in range(4): a = Pi[y, x] u, v = a2uv[a] plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1) plt.text(x, y, str(env.desc[y,x].item().decode()), color='g', size=12, verticalalignment='center', horizontalalignment='center', fontweight='bold') plt.grid(color='b', lw=2, ls='-') plt.figure() plt.plot(Vs_VI) plt.title("Values of different states"); ###Output _____no_output_____
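###Markdown
In practice, rather than fixing the number of iterations in advance, value iteration is often run until the largest change between successive value functions drops below a tolerance; because the Bellman backup is a contraction (with modulus gamma in the max norm), a small change between iterates implies the iterate is close to the optimal value function. A hedged sketch of such a stopping rule, reusing `mdp` and `GAMMA` from above (the names `tol` and `n_backups` are illustrative):
###Code
# Illustrative sketch: iterate the backup until max|V_new - V| < tol
# (assumes mdp, GAMMA and np as defined above; tol is an illustrative choice).
tol = 1e-4
V = np.zeros(mdp.nS)
n_backups = 0
while True:
    V_new = np.array([
        max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in mdp.P[s][a])
            for a in range(mdp.nA))
        for s in range(mdp.nS)])
    diff = np.abs(V_new - V).max()
    V, n_backups = V_new, n_backups + 1
    if diff < tol:
        break
print("converged after", n_backups, "backups")
###Output
_____no_output_____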
titanic_kaggle_competition/results/2020-04-04/Exploratory Data Analysis.ipynb
###Markdown Looking for Correlations ###Code corr_matrix = data.corr() corr_matrix['Survived'].sort_values(ascending=False) attributes = ['Age', 'Fare', 'Parch', 'Pclass', 'SibSp', 'Survived'] pd.plotting.scatter_matrix(data[attributes], figsize=(12,8)) plt.savefig('temp__CorrMatrix', format='png') ###Output _____no_output_____ ###Markdown Experimenting with Attribute Combinations ###Code data['FamilyMembers'] = data['SibSp'] + data['Parch'] corr_matrix = data.corr() corr_matrix['Survived'] ###Output _____no_output_____
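###Markdown
Other combinations can be screened the same way, by adding the candidate column and re-checking its correlation with `Survived`. A hedged sketch of one more combination (the `IsAlone` name is illustrative and not part of the original analysis):
###Code
# Illustrative sketch: flag passengers travelling without family and check
# how the new column correlates with survival (assumes the same `data` frame).
data['IsAlone'] = (data['FamilyMembers'] == 0).astype(int)
data.corr()['Survived'].sort_values(ascending=False)
###Output
_____no_output_____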
notebooks/cross_validation_train_test.ipynb
###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar (\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. 
###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. ###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. 
With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve theses fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. 
###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. 
Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel generalization performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their generalization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. 
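If the scores on the training folds are also of interest (for instance to see how far the training error sits from the testing error), `cross_validate` accepts a `return_train_score=True` option. A small sketch reusing the splitter defined above (the `cv_results_with_train` name is just illustrative):
###Code
# Illustrative sketch: also collect the per-split training scores
# (reuses regressor, data, target, cv and pd defined above).
cv_results_with_train = pd.DataFrame(cross_validate(
    regressor, data, target, cv=cv,
    scoring="neg_mean_absolute_error", return_train_score=True))
cv_results_with_train["train_error"] = -cv_results_with_train["train_score"]
cv_results_with_train.head()
###Output
_____no_output_____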
###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. ###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. 
Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown Cross-validation frameworkIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will be unstable andwouldn't reflect the "true error rate" we would have observed with the samemodel on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive model byrepeating the splitting procedure. It will give several training and testingerrors and thus some **estimate of the variability of the model generalizationperformance**.There are [different cross-validationstrategies](https://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation-iterators),for now we are going to focus on one called "shuffle-split". At each iterationof this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
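Written out by hand, one repetition of this procedure is just a shuffled split followed by a fit and an evaluation; the sketch below is only illustrative (the `manual_errors` and `data_tr`/`data_te` names are not from this notebook), and the `ShuffleSplit`/`cross_validate` pair used next automates this pattern.
###Code
# Illustrative, hand-rolled version of the shuffle-split procedure described
# above (reuses data, target, train_test_split, DecisionTreeRegressor and
# mean_absolute_error, all introduced earlier in this notebook).
manual_errors = []
for split_idx in range(40):
    # shuffle + split the full dataset (train_test_split shuffles by default)
    data_tr, data_te, target_tr, target_te = train_test_split(
        data, target, test_size=0.3, random_state=split_idx)
    model = DecisionTreeRegressor(random_state=0)
    model.fit(data_tr, target_tr)   # train a new model on the train set
    error = mean_absolute_error(target_te, model.predict(data_te))
    manual_errors.append(error)     # testing error of this split
###Output
_____no_output_____
###Markdown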
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that we will train 40 modelsin total and all of them will be discarded: we just record theirgeneralization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testing erroron each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40 splits.Therefore, we can show the testing error distribution and thus, have anestimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black") plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ and ranges from43 k\\$ to 50 k\\$. 
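As a side note (a hedged sketch added here, not part of the original analysis), `cross_validate` can also report the training error of each split when passing `return_train_score=True`; with a fully grown tree we would expect those training errors to be close to 0 k$, which is another way to see the memorization discussed earlier: ###Code
cv_results_with_train = cross_validate(
    regressor, data, target, cv=cv,
    scoring="neg_mean_absolute_error", return_train_score=True)
train_errors = -cv_results_with_train["train_score"]
test_errors = -cv_results_with_train["test_score"]
print(f"Mean training error: {train_errors.mean():.2f} k$")
print(f"Mean testing error:  {test_errors.mean():.2f} k$")
###Output
_____no_output_____
###Markdown We can also summarize the testing errors numerically.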
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is 46.36 +/-1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to that region.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black") plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown Cross-validation frameworkIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. 
While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will be unstable andwouldn't reflect the "true error rate" we would have observed with the samemodel on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive model byrepeating the splitting procedure. It will give several training and testingerrors and thus some **estimate of the variability of the model generalizationperformance**.There are [different cross-validationstrategies](https://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation-iterators),for now we are going to focus on one called "shuffle-split". At each iterationof this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
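To see what such splits look like in practice, here is a small sketch (added for illustration; it builds its own `ShuffleSplit` with only 3 splits) that prints the size of the index arrays generated at each iteration: ###Code
from sklearn.model_selection import ShuffleSplit

cv_demo = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
for split_id, (train_index, test_index) in enumerate(cv_demo.split(data)):
    print(f"Split #{split_id}: {len(train_index)} train samples, "
          f"{len(test_index)} test samples; first test indices: {test_index[:5]}")
###Output
_____no_output_____
###Markdown 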
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that we will train 40 modelsin total and all of them will be discarded: we just record theirgeneralization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testing erroron each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40 splits.Therefore, we can show the testing error distribution and thus, have anestimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ and ranges from43 k\\$ to 50 k\\$. 
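Instead of reading these values off the histogram, we can also ask pandas for a compact numerical summary (a small convenience sketch added here): ###Code
# mean, spread and range of the cross-validated testing error, in k$
cv_results["test_error"].agg(["mean", "std", "min", "max"])
###Output
_____no_output_____
###Markdown The next cell reports the mean and standard deviation explicitly.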
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is 46.36 +/-1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to that region.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). 
###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown Caution!Here and later, we use the name data and target to be explicit. Inscikit-learn, documentation data is commonly named X and target iscommonly called y. In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.Therefore, we will use predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar ($) range to the thousand dollars (k$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will use consistently the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will use consistently the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
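Before running the full shuffle-split procedure, a tiny illustration (added here, not part of the original notebook) already shows the point about variability: repeating a plain `train_test_split` with different seeds gives slightly different testing errors. ###Code
# re-split and re-fit a few times with different random seeds
for seed in range(5):
    data_tr, data_te, target_tr, target_te = train_test_split(
        data, target, random_state=seed)
    error = mean_absolute_error(
        target_te, regressor.fit(data_tr, target_tr).predict(data_te))
    print(f"random_state={seed}: testing error = {error:.2f} k$")
###Output
_____no_output_____
###Markdown The shuffle-split strategy generalizes this idea.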
Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use`cross_validate` with a `ShuffleSplit` object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipBy convention, scikit-learn model evaluation tools always use a conventionwhere "higher is better", this explains we usedscoring="neg_mean_absolute_error" (meaning "negative mean absolute error").Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each round ofcross-validation. Also, we get the test score, which corresponds to thetesting error on each of the split. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\$ andranges from 43 k\$ to 50 k\$. ###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\$.If we were to train a single model on the full dataset (withoutcross-validation) and then had later access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\$ up to 500 k\$ and, with astandard deviation around 115 k\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. 
Furthermore the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\$. However, it would be anissue with a house with a value of 50 k\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\$ might be too large to automatically useour model to tag house value without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of this `fit`/`score`. To make it explicit, it is possibleto retrieve theses fitted models for each of the fold by passing the option`return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you are interested only about the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target data ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar (\\$) range to the thousand dollars (k\\$) range. 
###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output On average, our regressor makes an error of 0.00 k$ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output The training error of our model is 0.00 k$ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output The testing error of our model is 47.28 k$ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. 
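As a quick sanity check of the memorization explanation above (a small sketch added here, not part of the original notebook), we can look at the size of the fitted tree: a fully grown decision tree ends up with roughly one leaf per training sample. ###Code
print(f"Number of leaves in the fitted tree: {regressor.get_n_leaves()}")
print(f"Number of training samples:          {data_train.shape[0]}")
print(f"Depth of the fitted tree:            {regressor.get_depth()}")
###Output
_____no_output_____
###Markdown 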
Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. 
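The timing columns can be summarized just like the scores; for instance (a small sketch added for illustration): ###Code
# average time in seconds spent fitting and scoring per split
cv_results[["fit_time", "score_time"]].mean()
###Output
_____no_output_____
###Markdown We can also check how many splits were evaluated.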
###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\$ and ranges from 43 k\$ to 50 k\$. ###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output The standard deviation of the testing error is: 1.17 k$ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output The standard deviation of the target is: 115.40 k$ ###Markdown The target variable ranges from close to 0 k\$ up to 500 k\$ and, with astandard deviation around 115 k\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\$. However, it would be anissue with a house with a value of 50 k\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve theses fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. 
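As a hedged sketch (added here for illustration), we can ask each of these fitted trees for its depth and number of leaves: ###Code
for fold_id, tree in enumerate(cv_results["estimator"]):
    print(f"Fold #{fold_id}: depth={tree.get_depth()}, "
          f"number of leaves={tree.get_n_leaves()}")
###Output
_____no_output_____
###Markdown 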
Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve theses fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). 
###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output On average, our regressor makes an error of 0.00 k$ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output The training error of our model is 0.00 k$ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output The testing error of our model is 47.28 k$ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
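To make these steps concrete, below is a minimal sketch of a single shuffle-split iteration written out by hand with scikit-learn's `ShuffleSplit` splitter (the variable names and the `test_size` value are only illustrative choices):
###Code
from sklearn.model_selection import ShuffleSplit

# A single shuffle-split iteration made explicit, for illustration only
illustration_splitter = ShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
for train_index, test_index in illustration_splitter.split(data):
    split_regressor = DecisionTreeRegressor(random_state=0)
    split_regressor.fit(data.iloc[train_index], target.iloc[train_index])
    split_predictions = split_regressor.predict(data.iloc[test_index])
    split_error = mean_absolute_error(target.iloc[test_index], split_predictions)
    print(f"Testing error for this split: {split_error:.2f} k$")
###Output
_____no_output_____
###Markdown
In practice we do not write this loop ourselves: scikit-learn repeats it for us.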
Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
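As a quick numerical complement to this histogram, we could also print summary statistics of the per-split testing error (a small sketch reusing the `cv_results` dataframe built above):
###Code
# Summary statistics of the testing error across the 40 splits
cv_results["test_error"].describe()
###Output
_____no_output_____
###Markdown
In particular, its mean and standard deviation give a compact summary: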
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output The standard deviation of the testing error is: 1.17 k$ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output The standard deviation of the target is: 115.40 k$ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve theses fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). 
###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown Caution!Here and later, we use the name data and target to be explicit. Inscikit-learn documentation, data is commonly named X and target iscommonly called y. In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.Therefore, we will use predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar (\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use`cross_validate` with a `ShuffleSplit` object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each round ofcross-validation. Also, we get the test score, which corresponds to thetesting error on each of the split. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
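To put these values into perspective, one possible sanity check is to cross-validate a trivial baseline as well, for instance scikit-learn's `DummyRegressor`, which here would always predict the mean of the training targets (a sketch reusing the `cv` splitter defined above; the variable names are illustrative):
###Code
from sklearn.dummy import DummyRegressor

# Baseline errors obtained with a model that ignores the features entirely
dummy_results = cross_validate(
    DummyRegressor(strategy="mean"), data, target, cv=cv,
    scoring="neg_mean_absolute_error")
dummy_errors = -pd.DataFrame(dummy_results)["test_score"]
print(f"Baseline mean absolute error: {dummy_errors.mean():.2f} k$")
###Output
_____no_output_____
###Markdown
We would expect our decision tree to do noticeably better than such a baseline. Coming back to its cross-validated testing error: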
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then had later access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house value without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of this `fit`/`score`. To make it explicit, it is possibleto retrieve theses fitted models for each of the fold by passing the option`return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you are interested only about the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown Cross-validation frameworkIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. 
While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will be unstable andwouldn't reflect the "true error rate" we would have observed with the samemodel on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive model byrepeating the splitting procedure. It will give several training and testingerrors and thus some **estimate of the variability of the model generalizationperformance**.There are [different cross-validationstrategies](https://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation-iterators),for now we are going to focus on one called "shuffle-split". At each iterationof this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that we will train 40 modelsin total and all of them will be discarded: we just record theirgeneralization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testing erroron each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40 splits.Therefore, we can show the testing error distribution and thus, have anestimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black") plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ and ranges from43 k\\$ to 50 k\\$. 
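If we prefer it to the histogram, a box plot of the per-split errors offers another view of the spread (a minimal sketch based on the `cv_results` dataframe above):
###Code
# Box plot of the cross-validated testing errors
cv_results["test_error"].plot.box(vert=False)
plt.xlabel("Mean absolute error (k$)")
_ = plt.title("Testing error across the splits")
###Output
_____no_output_____
###Markdown
The mean and standard deviation summarize the same information numerically: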
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is 46.36 +/-1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to that region.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black") plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. 
While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel generalization performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that wewill train 40 models in total and all of them will be discarded: we justrecord their generalization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
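As discussed further below, an error expressed relative to the target value can be more informative than an absolute error. As a sketch, recent scikit-learn versions expose the mean absolute percentage error through the `scoring="neg_mean_absolute_percentage_error"` string:
###Code
# Cross-validate with a relative error metric (requires a recent scikit-learn)
mape_results = cross_validate(
    regressor, data, target, cv=cv,
    scoring="neg_mean_absolute_percentage_error")
mape_errors = -pd.DataFrame(mape_results)["test_score"]
print(f"Mean absolute percentage error: {mape_errors.mean():.3f}")
###Output
_____no_output_____
###Markdown
For now, let us stay with the mean absolute error and summarize it: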
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown Cross-validation frameworkIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. 
While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output On average, our regressor makes an error of 0.00 k$ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output The training error of our model is 0.00 k$ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output The testing error of our model is 47.28 k$ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will be unstable andwouldn't reflect the "true error rate" we would have observed with the samemodel on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive model byrepeating the splitting procedure. It will give several training and testingerrors and thus some **estimate of the variability of the model generalizationperformance**.There are [different cross-validationstrategies](https://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation-iterators),for now we are going to focus on one called "shuffle-split". At each iterationof this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that we will train 40 modelsin total and all of them will be discarded: we just record theirgeneralization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testing erroron each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40 splits.Therefore, we can show the testing error distribution and thus, have anestimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black") plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ and ranges from43 k\\$ to 50 k\\$. 
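If we wanted a rough interval rather than a single number, we could, for instance, report empirical quantiles of the per-split errors (a small sketch; the 2.5% and 97.5% levels are an arbitrary illustrative choice):
###Code
# An empirical interval covering most of the observed testing errors
cv_results["test_error"].quantile([0.025, 0.975])
###Output
_____no_output_____
###Markdown
The mean and standard deviation give a more conventional summary: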
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output The standard deviation of the testing error is: 1.17 k$ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is 46.36 +/-1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to that region.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black") plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output The standard deviation of the target is: 115.40 k$ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). 
###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown Caution!Here and later, we use the name data and target to be explicit. Inscikit-learn documentation, data is commonly named X and target iscommonly called y. In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.Therefore, we will use predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar (\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will use consistently the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will use consistently the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use`cross_validate` with a `ShuffleSplit` object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each round ofcross-validation. Also, we get the test score, which corresponds to thetesting error on each of the split. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then had later access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house value without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of this `fit`/`score`. To make it explicit, it is possibleto retrieve theses fitted models for each of the fold by passing the option`return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you are interested only about the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. 
While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown Caution!Here and later, we use the name data and target to be explicit. Inscikit-learn, documentation data is commonly named X and target iscommonly called y. In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.Therefore, we will use predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar ($) range to the thousand dollars (k$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will use consistently the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will use consistently the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. 
###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use`cross_validate` with a `ShuffleSplit` object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. 
###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipBy convention, scikit-learn model evaluation tools always use a conventionwhere "higher is better", this explains we usedscoring="neg_mean_absolute_error" (meaning "negative mean absolute error").Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each round ofcross-validation. Also, we get the test score, which corresponds to thetesting error on each of the split. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. ###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then had later access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. 
Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house value without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of this `fit`/`score`. To make it explicit, it is possibleto retrieve theses fitted models for each of the fold by passing the option`return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you are interested only about the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from thedollar (\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential statisticalperformance once deployed in production. 
For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output On average, our regressor makes an error of 0.00 k$ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output The training error of our model is 0.00 k$ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output The testing error of our model is 47.28 k$ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will beunstable and wouldn't reflect the "true error rate" we would have observedwith the same model on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive modelby repeating the splitting procedure. 
It will give several training andtesting errors and thus some **estimate of the variability of themodel statistical performance**.There are different cross-validation strategies, for now we are going tofocus on one called "shuffle-split". At each iteration of this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. Using `n_splits=40` means that wewill train 40 models in total and all of them will be discarded: we justrecord their statistical performance on each variant of the test set.To evaluate the statistical performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testingerror on each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40splits. Therefore, we can show the testing error distribution and thus, havean estimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ andranges from 43 k\\$ to 50 k\\$. 
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output The standard deviation of the testing error is: 1.17 k$ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is46.36 +/- 1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to thatregion.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output The standard deviation of the target is: 115.40 k$ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve theses fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). 
###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____ ###Markdown The framework and why do we need itIn the previous notebooks, we introduce some concepts regarding theevaluation of predictive models. While this section could be slightlyredundant, we intend to go into details into the cross-validation framework.Before we dive in, let's linger on the reasons for always having training andtesting sets. Let's first look at the limitation of using a dataset withoutkeeping any samples out.To illustrate the different concepts, we will use the California housingdataset. ###Code from sklearn.datasets import fetch_california_housing housing = fetch_california_housing(as_frame=True) data, target = housing.data, housing.target ###Output _____no_output_____ ###Markdown In this dataset, the aim is to predict the median value of houses in an areain California. The features collected are based on general real-estate andgeographical information.Therefore, the task to solve is different from the one shown in the previousnotebook. The target to be predicted is a continuous variable and not anymorediscrete. This task is called regression.This, we will use a predictive model specific to regression and not toclassification. ###Code print(housing.DESCR) data.head() ###Output _____no_output_____ ###Markdown To simplify future visualization, let's transform the prices from the100 (k\\$) range to the thousand dollars (k\\$) range. ###Code target *= 100 target.head() ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. Training error vs testing errorTo solve this regression task, we will use a decision tree regressor. ###Code from sklearn.tree import DecisionTreeRegressor regressor = DecisionTreeRegressor(random_state=0) regressor.fit(data, target) ###Output _____no_output_____ ###Markdown After training the regressor, we would like to know its potential generalizationperformance once deployed in production. For this purpose, we use the meanabsolute error, which gives us an error in the native unit, i.e. k\\$. ###Code from sklearn.metrics import mean_absolute_error target_predicted = regressor.predict(data) score = mean_absolute_error(target, target_predicted) print(f"On average, our regressor makes an error of {score:.2f} k$") ###Output _____no_output_____ ###Markdown We get perfect prediction with no error. It is too optimistic and almostalways revealing a methodological problem when doing machine learning.Indeed, we trained and predicted on the same dataset. 
Since our decision treewas fully grown, every sample in the dataset is stored in a leaf node.Therefore, our decision tree fully memorized the dataset given during `fit`and therefore made no error when predicting.This error computed above is called the **empirical error** or **trainingerror**.NoteIn this MOOC, we will consistently use the term "training error".We trained a predictive model to minimize the training error but our aim isto minimize the error on data that has not been seen during training.This error is also called the **generalization error** or the "true"**testing error**.NoteIn this MOOC, we will consistently use the term "testing error".Thus, the most basic evaluation involves:* splitting our dataset into two subsets: a training set and a testing set;* fitting the model on the training set;* estimating the training error on the training set;* estimating the testing error on the testing set.So let's split our dataset. ###Code from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) ###Output _____no_output_____ ###Markdown Then, let's train our model. ###Code regressor.fit(data_train, target_train) ###Output _____no_output_____ ###Markdown Finally, we estimate the different types of errors. Let's start by computingthe training error. ###Code target_predicted = regressor.predict(data_train) score = mean_absolute_error(target_train, target_predicted) print(f"The training error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown We observe the same phenomena as in the previous experiment: our modelmemorized the training set. However, we now compute the testing error. ###Code target_predicted = regressor.predict(data_test) score = mean_absolute_error(target_test, target_predicted) print(f"The testing error of our model is {score:.2f} k$") ###Output _____no_output_____ ###Markdown This testing error is actually about what we would expect from our model ifit was used in a production environment. Stability of the cross-validation estimatesWhen doing a single train-test split we don't give any indication regardingthe robustness of the evaluation of our predictive model: in particular, ifthe test set is small, this estimate of the testing error will be unstable andwouldn't reflect the "true error rate" we would have observed with the samemodel on an unlimited amount of test data.For instance, we could have been lucky when we did our random split of ourlimited dataset and isolated some of the easiest cases to predict in thetesting set just by chance: the estimation of the testing error would beoverly optimistic, in this case.**Cross-validation** allows estimating the robustness of a predictive model byrepeating the splitting procedure. It will give several training and testingerrors and thus some **estimate of the variability of the model generalizationperformance**.There are [different cross-validationstrategies](https://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation-iterators),for now we are going to focus on one called "shuffle-split". At each iterationof this strategy we:- randomly shuffle the order of the samples of a copy of the full dataset;- split the shuffled dataset into a train and a test set;- train a new model on the train set;- evaluate the testing error on the test set.We repeat this procedure `n_splits` times. 
Keep in mind that the computationalcost increases with `n_splits`.![Cross-validation diagram](../figures/shufflesplit_diagram.png)NoteThis figure shows the particular case of shuffle-split cross-validationstrategy using n_splits=5.For each cross-validation split, the procedure trains a model on all the redsamples and evaluate the score of the model on the blue samples.In this case we will set `n_splits=40`, meaning that we will train 40 modelsin total and all of them will be discarded: we just record theirgeneralization performance on each variant of the test set.To evaluate the generalization performance of our regressor, we can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)with a[`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)object: ###Code from sklearn.model_selection import cross_validate from sklearn.model_selection import ShuffleSplit cv = ShuffleSplit(n_splits=40, test_size=0.3, random_state=0) cv_results = cross_validate( regressor, data, target, cv=cv, scoring="neg_mean_absolute_error") ###Output _____no_output_____ ###Markdown The results `cv_results` are stored into a Python dictionary. We will convertit into a pandas dataframe to ease visualization and manipulation. ###Code import pandas as pd cv_results = pd.DataFrame(cv_results) cv_results.head() ###Output _____no_output_____ ###Markdown TipA score is a metric for which higher values mean better results. On thecontrary, an error is a metric for which lower values mean better results.The parameter scoring in cross_validate always expect a function that isa score.To make it easy, all error metrics in scikit-learn, likemean_absolute_error, can be transformed into a score to be used incross_validate. To do so, you need to pass a string of the error metricwith an additional neg_ string at the front to the parameter scoring;for instance scoring="neg_mean_absolute_error". In this case, the negativeof the mean absolute error will be computed which would be equivalent to ascore.Let us revert the negation to get the actual error: ###Code cv_results["test_error"] = -cv_results["test_score"] ###Output _____no_output_____ ###Markdown Let's check the results reported by the cross-validation. ###Code cv_results.head(10) ###Output _____no_output_____ ###Markdown We get timing information to fit and predict at each cross-validationiteration. Also, we get the test score, which corresponds to the testing erroron each of the splits. ###Code len(cv_results) ###Output _____no_output_____ ###Markdown We get 40 entries in our resulting dataframe because we performed 40 splits.Therefore, we can show the testing error distribution and thus, have anestimate of its variability. ###Code import matplotlib.pyplot as plt cv_results["test_error"].plot.hist(bins=10, edgecolor="black", density=True) plt.xlabel("Mean absolute error (k$)") _ = plt.title("Test error distribution") ###Output _____no_output_____ ###Markdown We observe that the testing error is clustered around 47 k\\$ and ranges from43 k\\$ to 50 k\\$. 
###Code print(f"The mean cross-validated testing error is: " f"{cv_results['test_error'].mean():.2f} k$") print(f"The standard deviation of the testing error is: " f"{cv_results['test_error'].std():.2f} k$") ###Output _____no_output_____ ###Markdown Note that the standard deviation is much smaller than the mean: we couldsummarize that our cross-validation estimate of the testing error is 46.36 +/-1.17 k\\$.If we were to train a single model on the full dataset (withoutcross-validation) and then later had access to an unlimited amount of testdata, we would expect its true testing error to fall close to that region.While this information is interesting in itself, it should be contrasted tothe scale of the natural variability of the vector `target` in our dataset.Let us plot the distribution of the target variable: ###Code target.plot.hist(bins=20, edgecolor="black", density=True) plt.xlabel("Median House Value (k$)") _ = plt.title("Target distribution") print(f"The standard deviation of the target is: {target.std():.2f} k$") ###Output _____no_output_____ ###Markdown The target variable ranges from close to 0 k\\$ up to 500 k\\$ and, with astandard deviation around 115 k\\$.We notice that the mean estimate of the testing error obtained bycross-validation is a bit smaller than the natural scale of variation of thetarget variable. Furthermore, the standard deviation of the cross validationestimate of the testing error is even smaller.This is a good start, but not necessarily enough to decide whether thegeneralization performance is good enough to make our prediction useful inpractice.We recall that our model makes, on average, an error around 47 k\\$. With thisinformation and looking at the target distribution, such an error might beacceptable when predicting houses with a 500 k\\$. However, it would be anissue with a house with a value of 50 k\\$. Thus, this indicates that ourmetric (Mean Absolute Error) is not ideal.We might instead choose a metric relative to the target value to predict: themean absolute percentage error would have been a much better choice.But in all cases, an error of 47 k\\$ might be too large to automatically useour model to tag house values without expert supervision. More detail regarding `cross_validate`During cross-validation, many models are trained and evaluated. Indeed, thenumber of elements in each array of the output of `cross_validate` is aresult from one of these `fit`/`score` procedures. To make it explicit, it ispossible to retrieve these fitted models for each of the splits/folds bypassing the option `return_estimator=True` in `cross_validate`. ###Code cv_results = cross_validate(regressor, data, target, return_estimator=True) cv_results cv_results["estimator"] ###Output _____no_output_____ ###Markdown The five decision tree regressors corresponds to the five fitted decisiontrees on the different folds. Having access to these regressors is handybecause it allows to inspect the internal fitted parameters of theseregressors.In the case where you only are interested in the test score, scikit-learnprovide a `cross_val_score` function. It is identical to calling the`cross_validate` function and to select the `test_score` only (as weextensively did in the previous notebooks). ###Code from sklearn.model_selection import cross_val_score scores = cross_val_score(regressor, data, target) scores ###Output _____no_output_____
experiments/check-results.ipynb
###Markdown Visualization of the results In this notebook we just visualize and format the pickle file containing the experiments results ###Code import pickle as pkl import pandas as pd pd.set_option("display.max_rows", 999) def print_results(filename, metrics=None): with open(filename, "rb") as f: results = pkl.load(f) print("commit number %s: datetime: %s" % (results["commit"], results["datetime"])) if metrics is None: metrics = ["avg_prec_w", "avg_prec_w_train", "log_loss"] df = results["results"] return df def print_results(filename, metrics=None): with open(filename, "rb") as f: results = pkl.load(f) print("commit number %s: datetime: %s" % (results["commit"], results["datetime"])) if metrics is None: metrics = ["avg_prec_w", "avg_prec_w_train", "log_loss"] df = results["results"] return ( df[["dataset", "classifier_title", *metrics, "repeat"]] .groupby(["dataset", "classifier_title", "repeat"]) # .agg(["mean", "std"]) .agg(["mean"]) .reset_index() .loc[:, ["dataset", "classifier_title", *metrics]] .pivot(index="dataset", columns="classifier_title") .style .format("{:.2f}") .background_gradient(axis="index") ) filename = "../benchmarks_2021-05-11-16:56:05.pickle" print_results(filename) ###Output commit number 14fe539: datetime: 2021-05-11-16:56:05 ###Markdown Results with n_estimators=100, without aggregation and max_features=None, dirichlet=0.0 ###Code filename = "../benchmarks_2021-04-19-15:25:42.pickle" print_results(filename) ###Output commit number 6ad6dfc: datetime: 2021-04-19-15:25:42 ###Markdown Results with n_estimators=100, without aggregation and max_features="auto", dirichlet=0.0 ###Code filename = "../benchmarks_2021-04-19-15:29:20.pickle" print_results(filename, metrics=["avg_prec_w", "fit_time"]) ###Output commit number 6ad6dfc: datetime: 2021-04-19-15:29:20 ###Markdown Results with n_estimators=100, without aggregation and max_features="auto", dirichlet=0.5 ###Code filename = "../benchmarks_2021-04-19-15:33:27.pickle" print_results(filename, metrics=["avg_prec_w", "fit_time"]) ###Output commit number 6ad6dfc: datetime: 2021-04-19-15:33:27 ###Markdown Results with n_estimators=100, with aggregation and max_features="auto", dirichlet=0.5 ###Code filename = "../benchmarks_2021-04-19-15:37:09.pickle" print_results(filename, metrics=["avg_prec_w", "fit_time"]) import sys import subprocess from time import time from datetime import datetime import logging import pickle as pkl import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn.metrics import ( roc_auc_score, average_precision_score, log_loss, accuracy_score, ) from sklearn.preprocessing import LabelBinarizer from sklearn.ensemble import RandomForestClassifier sys.path.extend([".", ".."]) from wildwood.dataset import loaders_small_classification, load_churn from wildwood.forest import ForestClassifier from wildwood.dataset import ( load_adult, load_bank, load_breastcancer, load_car, load_cardio, load_churn, load_default_cb, load_letter, load_satimage, load_sensorless, load_spambase, ) dataset = load_bank() clf = ForestClassifier( n_estimators=1, n_jobs=1, class_weight="balanced", random_state=42, aggregation=False, max_features=None, dirichlet=0.0 ) dataset.one_hot_encode = True dataset.standardize = False dataset.drop = None X_train, X_test, y_train, y_test = dataset.extract(random_state=42) clf.fit(X_train, y_train) from bokeh.plotting import show, output_notebook from wildwood.plot import plot_tree output_notebook() fig = plot_tree(clf, height=900, width=900) show(fig) dataset.one_hot_encode = False
dataset.standardize = False dataset.drop = None X_train, X_test, y_train, y_test = dataset.extract(random_state=42) clf = ForestClassifier( n_estimators=1, n_jobs=1, class_weight="balanced", random_state=42, aggregation=False, max_features=None, dirichlet=0.0, categorical_features=dataset.categorical_features_ ) clf.fit(X_train, y_train) fig = plot_tree(clf, height=900, width=900) show(fig) df = clf.get_nodes(0) pd.set_option("display.max_columns", 99) df.loc[338:342] clf.trees[0]._tree partition_train = clf.trees[0]._tree_context.partition_train[19512:19517] partition_valid = clf.trees[0]._tree_context.partition_valid[11271:11273] pd.DataFrame(X_train[partition_train, :]) pd.DataFrame(X_train[partition_valid, :]) clf.trees[0]._tree_context.partition_train ###Output _____no_output_____
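###Markdown As an illustrative follow-up (not part of the original notebook), the node table returned by `get_nodes` can also be used to quantify the size of the fitted tree. This only assumes, as in the cells above, that `get_nodes(0)` returns a pandas dataframe with one row per node of the first tree. ###Code
# Illustrative: summarize the categorical-split tree from its node table.
nodes = clf.get_nodes(0)
print(f"Number of nodes in tree 0: {len(nodes)}")
print(f"Available node attributes: {list(nodes.columns)}")
###Output _____no_output_____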
research/Gunosy_classifier_RF.ipynb
###Markdown RandomForest Algorithm ###Code import matplotlib.pyplot as plt import seaborn as sns import itertools from scipy import interp from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_validate from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import TimeSeriesSplit, GridSearchCV, RandomizedSearchCV from sklearn.metrics import confusion_matrix from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn import metrics from sklearn.metrics import roc_curve, auc def get_RandSearchCV(X_train, y_train, X_test, y_test, scoring): from sklearn.model_selection import TimeSeriesSplit from datetime import datetime as dt st_t = dt.now() # Numer of trees are used n_estimators = [5, 10, 50, 100, 150, 200, 250, 300] #n_estimators = list(np.arange(100,1000,50)) #n_estimators = [1000] # Maximum depth of each tree max_depth = [5, 10, 25, 50, 75, 100] # Minimum number of samples per leaf min_samples_leaf = [1, 2, 4, 8, 10] # Minimum number of samples to split a node min_samples_split = [2, 4, 6, 8, 10] # Maximum numeber of features to consider for making splits max_features = ["auto", "sqrt", "log2", None] hyperparameter = {'n_estimators': n_estimators, 'max_depth': max_depth, 'min_samples_leaf': min_samples_leaf, 'min_samples_split': min_samples_split, 'max_features': max_features} cv_timeSeries = TimeSeriesSplit(n_splits=5).split(X_train) base_model_rf = RandomForestClassifier(criterion="gini", random_state=42) # Run randomzed search n_iter_search = 30 rsearch_cv = RandomizedSearchCV(estimator=base_model_rf, random_state=42, param_distributions=hyperparameter, n_iter=n_iter_search, cv=cv_timeSeries, scoring=scoring, n_jobs=-1) rsearch_cv.fit(X_train, y_train) #f = open("output.txt", "a") print("Best estimator obtained from CV data: \n", rsearch_cv.best_estimator_) print("Best Score: ", rsearch_cv.best_score_) return rsearch_cv def evaluate_multiclass(best_clf, X_train, y_train, X_test, y_test, model="Random Forest", num_class=3): print("-"*100) print("~~~~~~~~~~~~~~~~~~ PERFORMANCE EVALUATION ~~~~~~~~~~~~~~~~~~~~~~~~") print("Detailed report for the {} algorithm".format(model)) best_clf.fit(X_train, y_train) y_pred = best_clf.predict(X_test) y_pred_prob = best_clf.predict_proba(X_test) test_accuracy = accuracy_score(y_test, y_pred, normalize=True) * 100 points = accuracy_score(y_test, y_pred, normalize=False) print("The number of accurate predictions out of {} data points on unseen data is {}".format( X_test.shape[0], points)) print("Accuracy of the {} model on unseen data is {}".format( model, np.round(test_accuracy, 2))) print("Precision of the {} model on unseen data is {}".format( model, np.round(metrics.precision_score(y_test, y_pred, average="macro"), 4))) print("Recall of the {} model on unseen data is {}".format( model, np.round(metrics.recall_score(y_test, y_pred, average="macro"), 4))) print("F1 score of the {} model on unseen data is {}".format( model, np.round(metrics.f1_score(y_test, y_pred, average="macro"), 4))) print("\nClassification report for {} model: \n".format(model)) print(metrics.classification_report(y_test, y_pred)) plt.figure(figsize=(15,15)) cnf_matrix = metrics.confusion_matrix(y_test, y_pred) cnf_matrix_norm = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis] print("\nThe Confusion Matrix: \n") print(cnf_matrix) cmap = plt.cm.Blues 
sns.heatmap(cnf_matrix_norm, annot=True, cmap=cmap, fmt=".2f", annot_kws={"size":15}) plt.title("The Normalized Confusion Matrix", fontsize=20) plt.ylabel("True label", fontsize=15) plt.xlabel("Predicted label", fontsize=15) plt.show() print("\nROC curve and AUC") y_pred = best_clf.predict(X_test) y_pred_prob = best_clf.predict_proba(X_test) y_test_cat = np.array(pd.get_dummies(y_test)) fpr = dict() tpr = dict() roc_auc = dict() for i in range(num_class): fpr[i], tpr[i], _ = metrics.roc_curve(y_test_cat[:,i], y_pred_prob[:,i]) roc_auc[i] = metrics.auc(fpr[i], tpr[i]) all_fpr = np.unique(np.concatenate([fpr[i] for i in range(num_class)])) mean_tpr = np.zeros_like(all_fpr) for i in range(num_class): mean_tpr += interp(all_fpr, fpr[i], tpr[i]) mean_tpr /= num_class fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["macro"] = metrics.auc(fpr["macro"], tpr["macro"]) plt.figure(figsize=(15,15)) plt.plot(fpr["macro"], tpr["macro"], label = "macro-average ROC curve with AUC = {} - Accuracy = {}%".format( round(roc_auc["macro"], 2), round(test_accuracy, 2)), color = "navy", linestyle=":", linewidth=4) #colors = cycle(["red", "orange", "blue", "pink", "green"]) colors = sns.color_palette() for i, color in zip(range(num_class), colors): plt.plot(fpr[i], tpr[i], color=color, lw=2, label = "ROC curve of class {0} (AUC = {1:0.2f})".format(i, roc_auc[i])) plt.plot([0,1], [0,1], "k--", lw=3, color='red') plt.title("ROC-AUC for {} model".format(model), fontsize=20) plt.xlabel("False Positive Rate", fontsize=15) plt.ylabel("True Positive Rate", fontsize=15) plt.legend(loc="lower right") plt.show() return y_pred, y_pred_prob ###Output _____no_output_____ ###Markdown RandomForest Algorithm for IF-IDF ###Code vectorizer = TfidfVectorizer(use_idf = True, token_pattern=u'(?u)\\b\\w+\\b') X = vectorizer.fit_transform(df.wakati_text.values) X = X.toarray() y = df["Category"].apply(lambda x: 0 if x == "エンタメ" else 1 if x == "スポーツ" else 2 if x == "グルメ" else 3 if x == "海外" else 4 if x == "おもしろ" else 5 if x == "国内" else 6 if x == "IT・科学" else 7) X.shape X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) print("Starting Cross Validation steps...") rsearch_cv = get_RandSearchCV(X_train, y_train, X_test, y_test, "f1_macro") random_forest = rsearch_cv.best_estimator_ random_forest.fit(X_train, y_train) y_pred, y_pred_prob = evaluate_multiclass(random_forest, X_train, y_train, X_test, y_test, model="Random Forest", num_class=8) import json import joblib joblib.dump(random_forest, "./rf_classifier.joblib", compress=True) ###Output _____no_output_____ ###Markdown RandomForest Algorithm for Word2Vec ###Code def get_doc_swem_max_vector(doc, model): words = doc.split() word_cnt = 0 vector_size = model.vector_size doc_vector = np.zeros((len(words), vector_size)) for i, word in enumerate(words): try: word_vector = model.wv[word] except KeyError: word_vector = np.zeros(vector_size) doc_vector[i, :] = word_vector doc_vector = np.max(doc_vector, axis=0) return doc_vector def get_doc_mean_vector(doc, model): doc_vector = np.zeros(model.vector_size) words = doc.split() word_cnt = 0 for word in words: try: word_vector = model.wv[word] doc_vector += word_vector word_cnt += 1 except KeyError: pass doc_vector /= word_cnt return doc_vector corpus = [doc.split() for doc in df.wakati_text.values] model_w2v = word2vec.Word2Vec(corpus, size=1000, min_count=20, window=10) X = np.zeros((len(df), model_w2v.wv.vector_size)) for i, doc in tqdm_notebook(enumerate(df.wakati_text.values)): X[i, :] = 
get_doc_mean_vector(doc, model_w2v) y = df["Category"].apply(lambda x: 0 if x == "エンタメ" else 1 if x == "スポーツ" else 2 if x == "グルメ" else 3 if x == "海外" else 4 if x == "おもしろ" else 5 if x == "国内" else 6 if x == "IT・科学" else 7) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) print("Starting Cross Validation steps...") rsearch_cv = get_RandSearchCV(X_train, y_train, X_test, y_test, "f1_macro") random_forest = rsearch_cv.best_estimator_ random_forest.fit(X_train, y_train) y_pred, y_pred_prob = evaluate_multiclass(random_forest, X_train, y_train, X_test, y_test, model="Random Forest", num_class=8) ###Output ---------------------------------------------------------------------------------------------------- ~~~~~~~~~~~~~~~~~~ PERFORMANCE EVALUATION ~~~~~~~~~~~~~~~~~~~~~~~~ Detailed report for the Random Forest algorithm The number of accurate predictions out of 912 data points on unseen data is 898 Accuracy of the Random Forest model on unseen data is 98.46 Precision of the Random Forest model on unseen data is 0.9852 Recall of the Random Forest model on unseen data is 0.9842 F1 score of the Random Forest model on unseen data is 0.9846 Classification report for Random Forest model: precision recall f1-score support 0 0.96 1.00 0.98 127 1 1.00 0.98 0.99 124 2 1.00 0.97 0.99 112 3 0.98 1.00 0.99 114 4 1.00 1.00 1.00 110 5 0.97 0.97 0.97 116 6 0.99 0.99 0.99 104 7 0.98 0.95 0.97 105 accuracy 0.98 912 macro avg 0.99 0.98 0.98 912 weighted avg 0.98 0.98 0.98 912 The Confusion Matrix: [[127 0 0 0 0 0 0 0] [ 2 122 0 0 0 0 0 0] [ 1 0 109 0 0 0 0 2] [ 0 0 0 114 0 0 0 0] [ 0 0 0 0 110 0 0 0] [ 2 0 0 1 0 113 0 0] [ 0 0 0 0 0 1 103 0] [ 0 0 0 1 0 3 1 100]]
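###Markdown As a usage sketch (not part of the original experiment), the fitted Word2Vec model and random forest can score a new document the same way the training data was embedded. The snippet below assumes `model_w2v`, `random_forest` and `get_doc_mean_vector` from the cells above are still in memory; `new_doc` is a hypothetical wakati-tokenized string whose tokens may or may not be in the learned vocabulary. ###Code
import numpy as np

# Hypothetical wakati-tokenized document (whitespace-separated tokens);
# tokens outside the learned vocabulary simply contribute nothing to the mean.
new_doc = "日本 代表 が 試合 に 勝利 した"

# Embed the document exactly as during training: the mean of its word vectors.
doc_vec = get_doc_mean_vector(new_doc, model_w2v).reshape(1, -1)

# Predict the category index and look at the most confident class probability.
print(random_forest.predict(doc_vec)[0])
print(random_forest.predict_proba(doc_vec).max())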
Classification/logistic-regression.ipynb
###Markdown Explore and pre-process data ###Code df = pd.read_csv("ChurnData.csv") df.head() df = df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip', 'callcard', 'wireless','churn']] df['churn'] = df['churn'].astype('int') df.head() print(df.shape) print(df.columns) X1=df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip','callcard', 'wireless']].values X1[0:5] X2=np.asarray(df[['tenure', 'age', 'address', 'income', 'ed', 'employ', 'equip','callcard', 'wireless']]) X2[0:5] X1==X2 y = np.asarray(df['churn']) y [0:5] #Normalize data from sklearn import preprocessing X = preprocessing.StandardScaler().fit(X1).transform(X1) X[0:5] ###Output _____no_output_____ ###Markdown Modeling ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4) print ('Train set:', X_train.shape, y_train.shape) print ('Test set:', X_test.shape, y_test.shape) from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train,y_train) LR yhat = LR.predict(X_test) yhat yhat_prob = LR.predict_proba(X_test) yhat_prob from sklearn.metrics import jaccard_score print(jaccard_score(y_test, yhat)) # jaccard_similarity_score was removed from recent scikit-learn releases; jaccard_score above is its replacement from sklearn.metrics import classification_report, confusion_matrix import itertools def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') print(confusion_matrix(y_test, yhat, labels=[1,0])) # Compute confusion matrix cnf_matrix = confusion_matrix(y_test, yhat, labels=[1,0]) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['churn=1','churn=0'],normalize= False, title='Confusion matrix') print (classification_report(y_test, yhat)) from sklearn.metrics import log_loss log_loss(y_test, yhat_prob) ###Output _____no_output_____
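###Markdown As a quick sanity check on the fitted model (an illustrative addition that only uses the `LR` estimator and the scaled `X_test` defined above), the predicted churn probability for a single customer can be reproduced by hand from the coefficients: the sigmoid of the linear score should match the second column of `predict_proba`. ###Code
import numpy as np

# Linear score w.x + b for the first test row, using the fitted coefficients.
z = np.dot(LR.coef_[0], X_test[0]) + LR.intercept_[0]

# The logistic (sigmoid) link maps the score to P(churn = 1 | x).
p_churn = 1.0 / (1.0 + np.exp(-z))

# Both numbers should agree (up to floating point noise).
print(p_churn, LR.predict_proba(X_test[:1])[0, 1])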
notebooks/sarcasm_on_reddit_sklearn_models.ipynb
###Markdown Sklearn models for sarcasm on Reddit data ###Code import json import os import pickle import random import warnings import matplotlib.pyplot as plt from scipy.sparse import hstack from scipy.sparse.csr import csr_matrix from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split, KFold, GridSearchCV, cross_val_score from sklearn.naive_bayes import BernoulliNB from sklearn.utils import shuffle from joblib import dump, load from xgboost import XGBClassifier from sarcsdet.configs.sklearn_models_config import * from sarcsdet.configs.sklearn_models_grid_search_params import * from sarcsdet.models.count_model_metrics import * warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Get data ###Code data_path = '../data/Sarcasm_on_Reddit' df = pd.read_pickle(os.path.join(data_path, 'rus-train-balanced-sarcasm-ling_feat.pkl')) # split data to train and test train_df, test_df = train_test_split(df, test_size=0.3, random_state=8) ###Output _____no_output_____ ###Markdown Random ###Code y_test = test_df.label y_pred = [random.choice([0, 1]) for y in y_test] y_pred_prob = [random.random() for y in y_test] show_test_classification_metrics(y_test, y_pred, y_pred_prob) ###Output F1: 0.50978 PREC: 0.51898 PR-AUC: 0.51864 ROC-AUC: 0.49927 ------------------------------------------------------- precision recall f1-score support 0 0.48 0.50 0.49 119653 1 0.52 0.50 0.51 129068 accuracy 0.50 248721 macro avg 0.50 0.50 0.50 248721 weighted avg 0.50 0.50 0.50 248721 ------------------------------------------------------- ###Markdown Train Sklearn models ###Code result_path = '../results/reddit' unsorted_scores = [] for filename in os.listdir(result_path): if filename.endswith('.json'): with open(os.path.join(result_path, filename)) as f: unsorted_scores.append(json.loads(f.read())) scores_df = pd.io.json.json_normalize(unsorted_scores) scores_df.rename(columns={ 'results.precision': 'precision', 'results.recall': 'recall', 'results.F1': 'F1', 'results.PR AUC': 'PR_AUC', 'results.ROC AUC': 'ROC_AUC'}, inplace=True ) scores_df.drop(columns=['seed', 'test samples'], axis=1, inplace=True) scores_df = scores_df[scores_df['recall'] > 0.5] scores_df = scores_df[scores_df['precision'] > 0.5] scores_df = scores_df[scores_df['F1'] > 0.5] scores_df = scores_df[scores_df['ROC_AUC'] > 0.5] pd.set_option('display.max_colwidth', df.shape[0] + 1) scores_df.sort_values(by=['precision'], ascending=False).head(15) ###Output _____no_output_____ ###Markdown Get parameters for best Sklearn model ###Code tfidf = TfidfVectorizer( ngram_range=(1, 3), max_features=50000, min_df=2 ) X = tfidf.fit_transform(train_df.rus_comment) y = train_df.label.values current_extra_features = [ 'funny_mark', 'interjections', 'exclamation', 'question', 'quotes', 'dotes' ] extra_features_data = csr_matrix(train_df[current_extra_features].values.astype(np.float)) X = hstack([X, extra_features_data], format='csr') cv = KFold(n_splits=5, shuffle=True) clf = LogisticRegression(**default_logit_params_rus) # --> lr_grid # clf = XGBClassifier(**default_xgb_params_rus) # --> xgb_grid # clf = BernoulliNB(**default_bayes_params_rus) # --> nb_grid scoring = { 'PREC': 'precision', 'PR_AUC': 'average_precision', 'AUC': 'roc_auc', 'F1': 'f1_weighted' } gs = GridSearchCV( clf, lr_grid, scoring=scoring, cv=cv, refit='AUC', verbose=10, n_jobs=6 ) search = gs.fit(X, y) best_estimator = gs.best_estimator_ best_estimator 
get_best_model_metrics(X, y, cv, best_estimator) X_test = tfidf.transform(test_df.rus_comment) test_extra_features_data = csr_matrix(test_df[current_extra_features].values.astype(np.float)) X_test = hstack([X_test, test_extra_features_data], format='csr') probas = best_estimator.predict_proba(X_test) show_test_classification_metrics( test_df.label.values, best_estimator.predict(X_test), probas[:, 1], X_test, best_estimator, probas, ) dump(tfidf, '../data/Models/reddit/tfidf.joblib') dump(best_estimator, '../data/Models/reddit/LogisticRegression.joblib') ###Output _____no_output_____
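###Markdown The two artifacts dumped above are enough to score new comments later. A minimal sketch, assuming the joblib files written above are still on disk and that the same six extra linguistic features can be recomputed for the new text; here they are filled with hypothetical values purely for illustration. ###Code
import numpy as np
from joblib import load
from scipy.sparse import hstack, csr_matrix

# Reload the fitted vectorizer and classifier that were dumped above.
tfidf_loaded = load('../data/Models/reddit/tfidf.joblib')
clf_loaded = load('../data/Models/reddit/LogisticRegression.joblib')

new_comment = ["ну конечно, отличная идея"]  # hypothetical comment to score
# Hypothetical values for funny_mark, interjections, exclamation, question, quotes, dotes.
extra = csr_matrix(np.array([[0.0, 1.0, 0.0, 0.0, 0.0, 0.0]]))

# Same layout as in training: TF-IDF features followed by the extra features.
X_new = hstack([tfidf_loaded.transform(new_comment), extra], format='csr')
print(clf_loaded.predict_proba(X_new)[:, 1])  # probability of the sarcastic class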
notebooks/PKL_model.ipynb
###Markdown It's then necessary to check if Acromine produced the correct results. We must fix errors manually ###Code top = miners['PKL'].top() top longforms0 = miners['PKL'].get_longforms() list(enumerate(longforms0)) longforms0 = [(longform, score) for i, (longform, score) in enumerate(longforms0) if i not in [2]] list(enumerate(top)) longforms0.extend((longform, score) for i, (longform, score) in enumerate(top) if i in [10]) longforms0.append(('pinus koraiensis leaf', 1)) longforms = longforms0 longforms.sort(key=lambda x: -x[1]) longforms, scores = zip(*longforms0) longforms grounding_map = {} names = {} for longform in longforms: grounding = gilda_ground(longform) if grounding[0]: grounding_map[longform] = f'{grounding[0]}:{grounding[1]}' names[grounding_map[longform]] = grounding[2] grounding_map names grounding_map, names, pos_labels = ground_with_gui(longforms, scores, grounding_map=grounding_map, names=names) result = (grounding_map, names, pos_labels) result grounding_map, names, pos_labels = ({'paxillin kinase linker': 'HGNC:4273', 'pickle': 'UP:Q9S775', 'pinus koraiensis leaf': 'ungrounded', 'protein kinase like': 'ungrounded'}, {'HGNC:4273': 'GIT2', 'UP:Q9S775': 'CHD3-type chromatin-remodeling factor PICKLE'}, ['HGNC:4273', 'UP:Q9S775']) names['HGNC:9020'] = 'PKLR' pos_labels.append('HGNC:9020') grounding_dict = {'PKL': grounding_map} classifier = AdeftClassifier('PKL', pos_labels=pos_labels) param_grid = {'C': [100.0], 'max_features': [10000]} labeler = AdeftLabeler(grounding_dict) corpus = labeler.build_from_texts(shortform_texts) corpus.extend(entrez_texts) texts, labels = zip(*corpus) classifier.cv(texts, labels, param_grid, cv=5, n_jobs=8) classifier.stats disamb = AdeftDisambiguator(classifier, grounding_dict, names) d = disamb.disambiguate(shortform_texts) a = [text for pred, text in zip(d, shortform_texts)if pred[0] == 'HGNC:9020'] a[0] disamb.dump('PKL', '../results') from adeft.disambiguate import load_disambiguator, load_disambiguator_directly disamb.classifier.training_set_digest model_to_s3(disamb) d.disambiguate(texts[0]) print(d.info()) a = load_disambiguator('AR') a.disambiguate('Androgen') logit = d.classifier.estimator.named_steps['logit'] logit.classes_ model_to_s3(disamb) classifier.feature_importances()['FPLX:RAC'] d = load_disambiguator('ALK', '../results') d.info() print(d.info()) model_to_s3(d) d = load_disambiguator('TAK', '../results') print(d.info()) model_to_s3(d) from adeft import available_shortforms print(d.info()) d.classifier.feature_importances() from adeft import __version__ __version__ from adeft.disambiguate import load_disambiguator_directly d = load_disambiguator_directly('../results/TEK/') print(d.info()) model_to_s3(d) d.grounding_dict !python -m adeft.download --update from adeft import available_shortforms len(available_shortforms) available_shortforms 'TEC' in available_shortforms 'TECs' in available_shortforms !python -m adeft.download --update !python -m adeft.download --update ###Output 100% [......................................................] 1181008 / 1181008
.ipynb_checkpoints/7. Function Decorators and Closures-checkpoint.ipynb
###Markdown Decorators 101 ###Code def deco(func): def inner(): print('running inner()') return inner @deco def target(): print('running target()') target() ###Output running inner() ###Markdown When Python Executes Decorators ###Code registry= [] def register(func): print('running register(%s)' % func) registry.append(func) return func @register def f1(): print('running f1()') @register def f2(): print('running f2()') def f3(): print('running f3()') def main(): print('running main()') print('registry ->', registry) f1() f2() f3() main() ###Output running main() registry -> [<function f1 at 0x000000F990EB1730>, <function f2 at 0x000000F990EB1F28>] running f1() running f2() running f3() ###Markdown Decorator-Enhanced Strategy Pattern ###Code promos = [] def promotion(promo_func): promos.append(promo_func) return promo_func @promotion def fidelity(order): """5% discount for customers with 1000 or more fidelity points""" return order.total() * .05 if order.customer.fidelity >= 1000 else 0 @promotion def bulk_item(order): """10% discount for each LineItem with 20 or more units""" discount = 0 for item in order.cart: if item.quantity >= 20: discount += item.total() * .1 return discount @promotion def large_order(order): """7% discount for oders with 10 or more distinct items""" distinct_items = {item.product for item in order.cart} if len(distinct_items) >= 10: return order.total() * .07 return 0 def best_promo(order): """Select best discount available""" return max(promo(order) for promo in promos) ###Output _____no_output_____ ###Markdown Variable Scope Rules ###Code def f1(a): print(a) print(b) f1(3) b=6 f1(3) def f2(a): print(a) print(b) b=9 f2(3) def f3(a): global b print(a) print(b) b=9 f3(3) ###Output 3 6 ###Markdown Closures ###Code class Averager(): def __init__(self): self.series = [] def __call__(self, new_value): self.series.append(new_value) total = sum(self.series) return total/len(self.series) avg = Averager() avg(10) avg(11) avg(12) def make_averager(): series = [] def averager(new_value): series.append(new_value) total = sum(series) return total/len(series) return averager avg = make_averager() avg(10) avg(11) avg(12) avg.__code__.co_varnames avg.__code__.co_freevars avg.__closure__ avg.__closure__[0].cell_contents ###Output _____no_output_____ ###Markdown The nonlocal Declaration ###Code def make_averager(): count = 0 total = 0 def averager(new_value): nonlocal count, total count += 1 total += new_value return total / count return averager avg = make_averager() avg(10) ###Output _____no_output_____ ###Markdown Implementing a Simple Decorator ###Code import time def clock(func): def clocked(*args): t0 = time.perf_counter() result = func(*args) elapsed = time.perf_counter() - t0 name = func.__name__ arg_str = ', '.join(repr(arg) for arg in args) print('[%0.8fs] %s(%s) -> %r' % (elapsed, name, arg_str, result)) return result return clocked @clock def snooze(seconds): time.sleep(seconds) @clock def factorial(n): return 1 if n < 2 else n*factorial(n-1) print('*' * 40, 'Calling snooze(.123)') snooze(.123) print('*' * 40, 'Calling factorial(6)') print('6! 
=', factorial(6)) factorial.__name__ import functools def clock(func): @functools.wraps(func) def clocked(*args, **kwargs): t0 = time.time() result = func(*args, **kwargs) elapsed = time.time() - t0 name = func.__name__ arg_lst = [] if args: arg_lst.append(', '.join(repr(arg) for arg in args)) if kwargs: pairs = ['%s=%r' % (k, w) for k, w in sorted(kwargs.items())] arg_lst.append(', '.join(pairs)) arg_str = ', '.join(arg_lst) print('[%0.8fs] %s(%s) -> %r' % (elapsed, name, arg_str, result)) return result return clocked @clock def snooze(seconds): time.sleep(seconds) @clock def factorial(n): return 1 if n < 2 else n*factorial(n-1) print('*' * 40, 'Calling snooze(.123)') snooze(.123) print('*' * 40, 'Calling factorial(6)') print('6! =', factorial(6)) factorial.__name__ ###Output _____no_output_____ ###Markdown Decorators in the Standard Library Memoization with functools.lru_cache ###Code from clockdeco import clock @clock def fibonacci(n): if n < 2: return n return fibonacci(n-2) + fibonacci(n-1) print(fibonacci(6)) import functools @functools.lru_cache() @clock def fibonacci(n): if n < 2: return n return fibonacci(n-2) + fibonacci(n-1) print(fibonacci(6)) ###Output [0.00000000s] fibonacci(0) -> 0 [0.00000000s] fibonacci(1) -> 1 [0.00000000s] fibonacci(2) -> 1 [0.00000000s] fibonacci(3) -> 2 [0.00000000s] fibonacci(4) -> 3 [0.00000000s] fibonacci(5) -> 5 [0.00100279s] fibonacci(6) -> 8 8 ###Markdown Generic Functions with Single Dispatch ###Code import html def htmlize(obj): content = html.escape(repr(obj)) return '<pre>{}</pre>'.format(content) htmlize({1, 2, 3}) htmlize(abs) htmlize('Heimlich & co.\n- a game') htmlize(42) print(htmlize(['alpha', 66, {3, 2, 1}])) from functools import singledispatch from collections import abc import numbers import html @singledispatch def htmlize(obj): content = html.escape(repr(obj)) return '<pre>{}</pre>'.format(content) @htmlize.register(str) def _(text): content = html.escape(text).replace('\n', '<br>\n') return '<p>{0}</p>'.format(content) @htmlize.register(numbers.Integral) def _(n): return '<pre>{0} (0x{0:x})</pre>'.format(n) @htmlize.register(tuple) @htmlize.register(abc.MutableSequence) def _(seq): inner = '</li>\n<li>'.join(htmlize(item) for item in seq) return '<ul>\n<li>' + inner + '</li>\n</ul>' htmlize({1, 2, 3}) htmlize(abs) ###Output _____no_output_____
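###Markdown A small illustrative addition: the generic function created by `functools.singledispatch` also exposes `dispatch()` and `registry`, which show which specialization would handle a given type without actually calling it. The sketch only relies on the `htmlize` generic function defined above. ###Code
# dispatch() returns the implementation that would run for a given class.
print(htmlize.dispatch(str))    # the str specialization registered above
print(htmlize.dispatch(float))  # no match, falls back to the base implementation

# registry maps each registered class to its implementation.
print(sorted(cls.__name__ for cls in htmlize.registry))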
Aprendizaje Automatico - Actividad 1.ipynb
###Markdown Machine Learning - Activity 1 Ivan J. Zepeda Gonzalez The process followed will be: generate the dataframe; format it and name the columns; handle the missing ("?") values; build a training set and a test set; run Naive Bayes in its Gaussian, Multinomial and Bernoulli variants; show the metrics; and plot a ROC curve ###Code import csv import sklearn from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import BernoulliNB from sklearn import preprocessing from sklearn.impute import MissingIndicator from sklearn.model_selection import train_test_split from sklearn import metrics from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score import pandas as pd import numpy as np import matplotlib.pyplot as plt #new_doc = open("house-votes-84.data").read() #print(new_doc) columns= ['Class Name','Handicap child','Water project','Adoption','Physician fee','El Salvador','Religious group', 'Anti satellite','Aid to Nicaragua','MX misile','Immigration','Synfuels','Education spending','Superfunds', 'Crime','Exports','Export admin to South Africa'] df=pd.read_csv("house-votes-84.data")#, names=columns df.columns=columns df.columns = [c.replace(' ', '_') for c in df.columns] df.head() ###Output _____no_output_____ ###Markdown This step makes it possible to handle the missing values properly (or is at least the starting point for it), so that "?" is not counted as a value of its own. This can be seen in the unique value reported by describe(), which drops to only 2 options. I decided to do the substitution manually.\* There is a note further down, at the **'dummies'** variable ###Code dummies=pd.get_dummies(df.Class_Name) dummies.head(3) #These values could be substituted into the dataframe to get binary columns instead of a string class name. #However, since class_name is the target, it is not clear that is worthwhile: #keep it as binary variables, or as a single numeric variable? df.replace({"?":np.NaN}, inplace=True) df.replace({"y":1}, inplace=True) df.replace({"n":0}, inplace=True) df.replace({"democrat":0},inplace=True) # df.replace({"republican":1},inplace=True) # df.head() #A map of the missing indices can be built, with True where a value is missing and False where a value is present. # to handle this information, the "?" has been transformed into NaN indicator = MissingIndicator(missing_values=np.NaN) indicator = indicator.fit_transform(df) indicator = pd.DataFrame(indicator) #print(indicator) df.describe() ###Output _____no_output_____ ###Markdown Formatting the data and assigning the variables used for training ###Code target = df.Class_Name inputs=df #.drop("Class_Name",axis='columns')# <- used to drop columns that are less relevant to the analysis and free up processing. #Check that the values are numeric. 
#Returns the columns that have these results, including the NaN values inputs.columns[inputs.isna().any()] #One approach is to use the mean to fill in the NaN value #inputs.Columna = inputs.Columna.fillna(inputs.Columna.mean()) inputs.Handicap_child = inputs.Handicap_child.fillna(inputs.Handicap_child.mean()) inputs.Water_project = inputs.Water_project.fillna(inputs.Water_project.mean()) inputs.Adoption = inputs.Adoption.fillna(inputs.Adoption.mean()) inputs.Physician_fee = inputs.Physician_fee.fillna(inputs.Physician_fee.mean()) inputs.El_Salvador = inputs.El_Salvador.fillna(inputs.El_Salvador.mean()) inputs.Religious_group = inputs.Religious_group.fillna(inputs.Religious_group.mean()) inputs.Anti_satellite = inputs.Anti_satellite.fillna(inputs.Anti_satellite.mean()) inputs.Aid_to_Nicaragua = inputs.Aid_to_Nicaragua.fillna(inputs.Aid_to_Nicaragua.mean()) inputs.MX_misile = inputs.MX_misile.fillna(inputs.MX_misile.mean()) inputs.Immigration = inputs.Immigration.fillna(inputs.Immigration.mean()) inputs.Synfuels = inputs.Synfuels.fillna(inputs.Synfuels.mean()) inputs.Education_spending = inputs.Education_spending.fillna(inputs.Education_spending.mean()) inputs.Superfunds = inputs.Superfunds.fillna(inputs.Superfunds.mean()) inputs.Crime = inputs.Crime.fillna(inputs.Crime.mean()) inputs.Exports = inputs.Exports.fillna(inputs.Exports.mean()) inputs.Export_admin_to_South_Africa = inputs.Export_admin_to_South_Africa.fillna(inputs.Export_admin_to_South_Africa.mean()) #a 'floor' could be applied to force the values back to 0 or 1; it is not clear whether this affects the training inputs.head() ###Output _____no_output_____ ###Markdown The training and test sets are generated. They are used to fit the Gaussian, Bernoulli and Multinomial NB models. The score of each one is also shown ###Code X_train, X_test, y_train, y_test = train_test_split(inputs,target,test_size=0.2) #dividing the data into an 80/20 ratio; 0.2 is the test size print(len(X_train)) print(len(X_test)) print(len(inputs)) GaussianModel=GaussianNB() GaussianModel.fit(X_train,y_train) GaussianModel.score(X_test,y_test)#Return the mean accuracy on the given test data and labels. 
X_test[:10] BernoulliModel=BernoulliNB() BernoulliModel.fit(X_train,y_train) BernoulliModel.score(X_test,y_test) X_test[:10] MultinomialModel=MultinomialNB() MultinomialModel.fit(X_train,y_train) MultinomialModel.score(X_test,y_test) ###Output _____no_output_____ ###Markdown The predictions of the trained models are compared ###Code gnb_y_pred=GaussianModel.predict(X_test) print(GaussianModel.predict(X_test[:10])) GNB_proba=GaussianModel.predict_proba(X_test) gnb_score=accuracy_score(y_test,gnb_y_pred) print("score: "+str(gnb_score)) #print(X_test[:10].Class_Name) bnb_y_pred=BernoulliModel.predict(X_test) print(BernoulliModel.predict(X_test[:10])) #print(BernoulliModel.predict_proba(X_test[:10])) BNB_proba=BernoulliModel.predict_proba(X_test) bnb_score=accuracy_score(y_test,bnb_y_pred) print("score: "+str(bnb_score)) mnb_y_pred=MultinomialModel.predict(X_test)#[:10] print(MultinomialModel.predict(X_test[:10])) MNB_proba=MultinomialModel.predict_proba(X_test)#[:,1] mnb_score=accuracy_score(y_test,mnb_y_pred) print("score: "+str(mnb_score)) ###Output [1 0 0 1 1 1 1 1 0 0] score: 0.896551724137931 ###Markdown Differences between the Naive Bayes algorithms:* **Bernoulli**: assumes all attributes are binary, taking only the two values 0 and 1.* **Multinomial**: used for discrete data.* **Gaussian**: used when all attributes are continuous and can take many different values; the attributes are not represented in terms of occurrence counts. Computing the metrics Confusion matrix ###Code ConfussionMatrix_Multinomial = confusion_matrix(y_test, mnb_y_pred) print(ConfussionMatrix_Multinomial) TP_MNB =ConfussionMatrix_Multinomial[0][0] #True Positives (multinomial) TN_MNB =ConfussionMatrix_Multinomial[1][1] #True Negatives ( multinomial) FP_MNB =ConfussionMatrix_Multinomial [0][1]#False Positives (multinomial) FN_MNB =ConfussionMatrix_Multinomial [1][0] #False Negatives (multinomial) ConfussionMatrix_Bernoulli = confusion_matrix(y_test, bnb_y_pred) print(ConfussionMatrix_Bernoulli) TP_BNB = ConfussionMatrix_Bernoulli[0][0]#True Positives (Bernoulli) TN_BNB = ConfussionMatrix_Bernoulli[1][1]#True Negatives (Bernoulli ) FP_BNB = ConfussionMatrix_Bernoulli[0][1]#False Positives (Bernoulli) FN_BNB = ConfussionMatrix_Bernoulli[1][0]#False Negatives (Bernoulli) ConfussionMatrix_Gaussian = confusion_matrix(y_test, gnb_y_pred) print(ConfussionMatrix_Gaussian) TP_GNB = ConfussionMatrix_Gaussian[0][0]#True Positives (Gaussian) TN_GNB = ConfussionMatrix_Gaussian[1][1]#True Negatives (Gaussian ) FP_GNB = ConfussionMatrix_Gaussian[0][1]#False Positives (Gaussian) FN_GNB = ConfussionMatrix_Gaussian[1][0]#False Negatives (Gaussian) ###Output [[60 0] [ 0 27]] ###Markdown It can be seen that for the Gaussian model there are no false positives or false negatives, so it stays at a margin of 0. It is followed by Bernoulli and then Multinomial. 
Accuracy / Error Rate ###Code def accuracy(TP,TN,FP,FN): return (TP + TN) / (TP + TN + FP + FN) def error_rate(TP,TN,FP,FN): #1-accuracy return (FP+FN)/(TP+TN+FP+FN) print("Accuracy") print("Gaussian: "+ str(accuracy(TP_GNB,TN_GNB,FP_GNB,FN_GNB))) print("Bernoulli: "+ str(accuracy(TP_BNB,TN_BNB,FP_BNB,FN_BNB))) print("Multinomial: "+ str(accuracy(TP_MNB,TN_MNB,FP_MNB,FN_MNB))) print("\nError Rate") print("Gaussian: "+ str(error_rate(TP_GNB,TN_GNB,FP_GNB,FN_GNB))) print("Bernoulli: "+ str(error_rate(TP_BNB,TN_BNB,FP_BNB,FN_BNB))) print("Multinomial: "+ str(error_rate(TP_MNB,TN_MNB,FP_MNB,FN_MNB))) ###Output Accuracy Gaussian: 1.0 Bernoulli: 0.9310344827586207 Multinomial: 0.896551724137931 Error Rate Gaussian: 0.0 Bernoulli: 0.06896551724137931 Multinomial: 0.10344827586206896 ###Markdown Sensitivity / Specificity ###Code def specificity(TN, FP): return TN/(TN+FP) def sensivity(TP,FN): return TP/(TP+FN) print("Specificity") print("Gaussian: "+ str(specificity(TN_GNB,FP_GNB))) print("Bernoulli: "+ str(specificity(TN_BNB,FP_BNB))) print("Multinomial: "+ str(specificity(TN_MNB,FP_MNB))) print("\nSensitivity") print("Gaussian: "+ str(sensivity(TP_GNB,FN_GNB))) print("Bernoulli: "+ str(sensivity(TP_BNB,FN_BNB))) print("Multinomial: "+ str(sensivity(TP_MNB,FN_MNB))) ###Output Specificity Gaussian: 1.0 Bernoulli: 0.8181818181818182 Multinomial: 0.75 Sensivity Gaussian: 1.0 Bernoulli: 1.0 Multinomial: 1.0 ###Markdown Precision/recall ###Code def precision(TP,FP): return TP/(TP+FP) def recall(TP,FN): return TP/(TP+FN) print("Precision") print("Gaussian: "+ str(precision(TP_GNB,FP_GNB))) print("Bernoulli: "+ str(precision(TP_BNB,FP_BNB))) print("Multinomial: "+ str(precision(TP_MNB,FP_MNB))) print("\nRecall") print("Gaussian: "+ str(recall(TP_GNB,FN_GNB))) print("Bernoulli: "+ str(recall(TP_BNB,FN_BNB))) print("Multinomial: "+ str(recall(TP_MNB,FN_MNB))) ###Output Precision Gaussian: 1.0 Bernoulli: 0.9 Multinomial: 0.85 Recall Gaussian: 1.0 Bernoulli: 1.0 Multinomial: 1.0 ###Markdown F-Measure ###Code #def fmeasure1(precision,recall): # return(2*precision*recall)/(precision+recall) def fmeasure(TP,FP,FN): return(2*TP)/(2*TP+FP+FN) print("Gaussian: "+ str(fmeasure(TP_GNB,FP_GNB,FN_GNB))) print("Bernoulli: "+ str(fmeasure(TP_BNB,FP_BNB,FN_BNB))) print("Multinomial: "+ str(fmeasure(TP_MNB,FP_MNB,FN_MNB))) ###Output Gaussian: 1.0 Bernoulli: 0.9310344827586207 Multinomial: 0.896551724137931 ###Markdown ROC curves ###Code #keep the probabilities of the positive class / reshape to a 1D array GNB_proba= GNB_proba[:,1] BNB_proba= BNB_proba[:,1] MNB_proba= MNB_proba[:,1] #AUC Score GNB_auc= roc_auc_score(y_test,GNB_proba) MNB_auc= roc_auc_score(y_test,MNB_proba) BNB_auc= roc_auc_score(y_test,BNB_proba) print("GNB-AUC: %0.2f"%GNB_auc) print("MNB-AUC: %0.2f"%MNB_auc) print("BNB-AUC: %0.2f"%BNB_auc) GNB_fpr, GNB_tpr, thresholds = roc_curve(y_test, GNB_proba) MNB_fpr, MNB_tpr, thresholds = roc_curve(y_test, MNB_proba) BNB_fpr, BNB_tpr, thresholds = roc_curve(y_test, BNB_proba) plt.plot(BNB_fpr, BNB_tpr, color="blue", label="Bernoulli",linestyle="-") plt.plot(MNB_fpr, MNB_tpr, color="orange", label="Multinomial",linestyle="-.") plt.plot(GNB_fpr, GNB_tpr, color="red", label="Gaussian",linestyle=":") plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver Operating Characteristic (ROC) Curve') plt.legend() plt.show() ###Output GNB-AUC: 1.00 MNB-AUC: 1.00 BNB-AUC: 1.00 ###Markdown Analysis of the results 
obtained ###Code mobile_dev=["Gaussian","Multinomial","Bernoulli"] numeros=[gnb_score,mnb_score,bnb_score] colores=['violet',"#3880ff",'gold'] expansion=(0.1,0,0) #show the data plt.pie(numeros,explode=expansion,labels=mobile_dev,colors=colores, autopct="%1.1f%%",shadow=True,startangle=45) plt.axis("equal") plt.show() ###Output _____no_output_____
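###Markdown As a cross-check (an illustrative addition), the hand-rolled metrics above can be compared against scikit-learn's implementations. The sketch assumes `y_test` and the prediction arrays from the previous cells are still in memory; `pos_label=0` mirrors the convention used above, where the [0][0] cell of the confusion matrix was treated as the true positives. ###Code
from sklearn.metrics import precision_score, recall_score, f1_score

for name, y_pred in [("Gaussian", gnb_y_pred), ("Bernoulli", bnb_y_pred), ("Multinomial", mnb_y_pred)]:
    print(name,
          precision_score(y_test, y_pred, pos_label=0),
          recall_score(y_test, y_pred, pos_label=0),
          f1_score(y_test, y_pred, pos_label=0))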
0.14/_downloads/plot_mne_inverse_psi_visual.ipynb
###Markdown =====================================================================Compute Phase Slope Index (PSI) in source space for a visual stimulus=====================================================================This example demonstrates how the Phase Slope Index (PSI) [1] can be computedin source space based on single trial dSPM source estimates. In addition,the example shows advanced usage of the connectivity estimation routinesby first extracting a label time course for each epoch and then combiningthe label time course with the single trial source estimates to compute theconnectivity.The result clearly shows how the activity in the visual label precedes morewidespread activity (a postivive PSI means the label time course is leading).References----------[1] Nolte et al. "Robustly Estimating the Flow Direction of Information inComplex Physical Systems", Physical Review Letters, vol. 100, no. 23,pp. 1-4, Jun. 2008. ###Code # Author: Martin Luessi <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, apply_inverse_epochs from mne.connectivity import seed_target_indices, phase_slope_index print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' fname_label = data_path + '/MEG/sample/labels/Vis-lh.label' event_id, tmin, tmax = 4, -0.2, 0.3 method = "dSPM" # use dSPM method (could also be MNE or sLORETA) # Load data inverse_operator = read_inverse_operator(fname_inv) raw = mne.io.read_raw_fif(fname_raw) events = mne.read_events(fname_event) # pick MEG channels picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6)) # Compute inverse solution and for each epoch. Note that since we are passing # the output to both extract_label_time_course and the phase_slope_index # functions, we have to use "return_generator=False", since it is only possible # to iterate over generators once. snr = 1.0 # use lower SNR for single epochs lambda2 = 1.0 / snr ** 2 stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, pick_ori="normal", return_generator=False) # Now, we generate seed time series by averaging the activity in the left # visual corex label = mne.read_label(fname_label) src = inverse_operator['src'] # the source space used seed_ts = mne.extract_label_time_course(stcs, label, src, mode='mean_flip') # Combine the seed time course with the source estimates. There will be a total # of 7500 signals: # index 0: time course extracted from label # index 1..7499: dSPM source space time courses comb_ts = zip(seed_ts, stcs) # Construct indices to estimate connectivity between the label time course # and all source space time courses vertices = [src[i]['vertno'] for i in range(2)] n_signals_tot = 1 + len(vertices[0]) + len(vertices[1]) indices = seed_target_indices([0], np.arange(1, n_signals_tot)) # Compute the PSI in the frequency range 8Hz..30Hz. We exclude the baseline # period from the connectivity estimation fmin = 8. fmax = 30. tmin_con = 0. 
sfreq = raw.info['sfreq'] # the sampling frequency psi, freqs, times, n_epochs, _ = phase_slope_index( comb_ts, mode='multitaper', indices=indices, sfreq=sfreq, fmin=fmin, fmax=fmax, tmin=tmin_con) # Generate a SourceEstimate with the PSI. This is simple since we used a single # seed (inspect the indices variable to see how the PSI scores are arranged in # the output) psi_stc = mne.SourceEstimate(psi, vertices=vertices, tmin=0, tstep=1, subject='sample') # Now we can visualize the PSI using the plot method. We use a custom colormap # to show signed values v_max = np.max(np.abs(psi)) brain = psi_stc.plot(surface='inflated', hemi='lh', time_label='Phase Slope Index (PSI)', subjects_dir=subjects_dir, clim=dict(kind='percent', pos_lims=(95, 97.5, 100))) brain.show_view('medial') brain.add_label(fname_label, color='green', alpha=0.7) ###Output _____no_output_____
2nd-prize__SaitejaUtpala__Estimators-of-Mean-of-SPD-Matrices/Shrinkage_Estimator_of_SPD_matrices.ipynb
###Markdown **Shrinkage Estimator of Mean of Symmetric Positive Definite (SPD) matrices**The following notebook implements and explores the use of a Shrinkage Estimator for the Frechet Mean on the manifold of Symmetric Positive Definite (SPD) matrices. Shrinkage Estimators are a class of estimators that achieve lower error than Maximum Likelihood Estimators. In the first section we will look at the classic James Stein Estimator for Euclidean spaces and see how it performs significantly better than the MLE (empirical mean). In the next section we will look at a shrinkage estimator on the manifold of SPD matrices and see that it also improves upon the MLE ###Code !pip3 install geomstats !pip3 install tqdm !pip3 install seaborn import numpy as np import geomstats.geometry.spd_matrices as spd import geomstats.backend as gs from sklearn.metrics import mean_squared_error from scipy.stats import invwishart import scipy.optimize as optimize from tqdm.notebook import tqdm import matplotlib.pyplot as plt import seaborn as sns sns.set() ###Output _____no_output_____ ###Markdown **Illustration of James Stein Estimator in $\mathbb{R}^n$**James and Stein showed that there exists a class of estimators with lower quadratic error than the Maximum Likelihood Estimator (MLE) of the mean of a Normal distribution. Specifically, assuming $X_i \sim \mathcal{N}(\mu_i , \sigma^2)$ with $\sigma^2$ known, the estimator$$X_{\text{js}} = \left( 1- \frac{(d-2)\sigma^2}{N\|\overline{X}\|^2} \right)\overline{X} , \text{ where } \overline{X} = \frac{1}{N} \sum_{i=1}^N X_i$$dominates $X_{\text{mle}} = \frac{1}{N} \sum_{i=1}^N X_i $ when $d \geq 3$. Note that the assumption that $\sigma^2$ is known can be relaxed: substituting $\sigma^2$ with its MLE gives a similar result. ###Code def generate_data(N,d): """ N : number of data points d : dimension of each data point """ means = np.full((d), 0) sigmas = np.full((d),1) data = np.random.normal(means,sigmas,(N,d)) return data,means,sigmas def mle_and_jse(data): """ data : numpy array Returns mle : Maximum Likelihood Estimator jse : James Stein Estimator """ mle = data.mean(axis = 0) N = data.shape[0] d = data.shape[1] jse_coeff = (1-(d-2)/(N*np.linalg.norm(mle)**2)) jse = jse_coeff * mle return mle,jse def get_rmse(data,mean,sigma): mle,jse = mle_and_jse(data) mle_rmse = mean_squared_error(mle,mean,squared=False) jse_rmse = mean_squared_error(jse,mean,squared=False) return mle_rmse,jse_rmse N = 10000 d = 10000 data,mean,sigma = generate_data(N,d) mle_rmse, jse_rmse = get_rmse(data,mean,sigma) print("RMSE for Maximum Likelihood Estimator" , mle_rmse) print("RMSE for James Stein Estimator" , jse_rmse) print("Improvement", mle_rmse/jse_rmse) ###Output RMSE for Maximum Likelihood Estimator 0.009949317889297217 RMSE for James Stein Estimator 9.961220947382957e-05 Improvement 99.88050603285868 ###Markdown Analysis of Big-data and Low-data regime ###Code def analysis(Ns,ds): scores = np.zeros((len(ds),len(Ns))) for i,d in tqdm(enumerate(ds)): for j,N in tqdm(enumerate(Ns)): data,mean,sigma = generate_data(N,d) mle,jse = mle_and_jse(data) mle_rmse = mean_squared_error(mle,mean,squared=False) jse_rmse = mean_squared_error(jse,mean,squared=False) scores[i,j] = mle_rmse/jse_rmse return scores # Ns = np.arange(10,50000*2,400) # print(len(Ns)) # ds = [1000,100,10,5] # scores = analysis(Ns,ds) ###Output 250 ###Markdown The following plots show the improvement of the James Stein Estimator (Improvement = $\frac{\text{RMSE of MLE estimator}}{\text{RMSE of JS estimator}}$) for dimensions $(d=5,10,100,1000)$ and varying $N$ from 
$10$ to $100,000$ in steps of $400$. *(Two embedded plot images, "download (3).png" and "download (2).png", plotting the improvement ratio of the James Stein Estimator over the MLE against $N$ for each dimension $d$, omitted.)*
NGoTZs2cDAERRxIIFCzB79uyI7UeIiIiILiSCoigX7GY0nCN3YWK/GBP7xZjYL8bDPjGmC3qOHBEREfVOR0rr8Zv/txMBSY53U3olBjkiIiLSzbHSBuw9VgO3JxDvpvRKDHJERESkG3UC1wU8k0tXDHJERESkGzXA8fKw+mCQIyIiIt3IrMjpikGOiIiIdKMgGOCY4/TBIEdERES6Ubf5kpnkdMEgR0RERLrhYgd9McgRERGRbtQAxxynDwY5IiIi0o0a4Di0qg8GOSIiItKNzIqcrhjkiIiISDecI6cvBjkiIiLSDbcf0ReDHBEREemGc+T0xSBHREREulH3kWOO0weDHBEREemGFTl9McgRERGRbrhqVV8MckRERKQbrlrVF4McERER6YZXdtAXgxwRERHphnPk9MUgR0RERLqRtX3kGOT0wCBHREREummZIxffdvRWDHJERESkm5Z95Jjk9MAgR0RERLpRA5wc53b0VgxyREREpBttaFVmRU4PDHJERESkG60ixxynCwY5IiIi0o3MDYF11WNBbv78+Rg/fjyGDRuGQ4cOabePHz8ekyZNwuTJkzF58mRs3LhRu2/nzp246667MHHiRPzkJz9BdXV1VPcRERGRMSjghsB66rEgN2HCBCxfvhz9+vVrc9+SJUtQXFyM4uJijBs3DgAgyzKeeuopzJo1CyUlJSgqKsKiRYs6vY+IiIiMg5fo0lePBbmioiI4nc6oH79nzx7YbDYUFRUBAKZMmYK1a9d2eh8REREZB+fI6csc7wYAwIwZM6AoCsaMGYMnn3wS6enpKCsrQ35+vvaYrKwsyLKMurq6c97ncDji8RaIiIioHZwjp6+4B7nly5fD6XTC5/Nh7ty5mDNnTo8Nk2Znp+r+Gjk5abq/BnUd+8WY2C/GxH4xnkTqE4vFBABIS7cnVLu7Ix7vL+5BTh1utVqtmDp1Kh5++GHt9tLSUu1xNTU1EEURDofjnPd1RXW1S9txWg85OWmorGzU7fmpe9gvxsR+MSb2i/EkWp94PH4AQH19c0K1u6v07BdRFDosPsV1+xG3243GxuCbVhQFa9asQWFhIQBgxIgR8Hg82Lp1KwBgxYoVmDRpUqf3ERERkXGoI6p6Fk4uZD1WkXvxxRexbt06VFVV4YEHHoDD4cCyZcswffp0SJIEWZYxaNAgzJ49GwAgiiIWLFiA2bNnw+v1ol+/fli4cGGn9xEREZFxqHPjOEVOH4JyAc8+5NDqhYn9YkzsF2NivxhPovXJS29/g11HqvHvtxfiu5dHv3tForkgh1aJiIiod5NZkdMVgxwRERHphhsC64tBjoiIiHSjzZGLczt6KwY5IiIi0o06F11mRU4XDHJERESkm5ah1fi2o7dikCMiIiLdtGw/wiSnBwY5IiIi0o2sfuSGwLpgkCMiIiLdcENgfTHIERERkW64/Yi+GOSIiIhIN2qA48iqPhjkiIiISDdqgFO4k5wuGOSIiIhIN4rMOXJ6YpAjIiIi3cicI6crBjkiIiLSjTqkyjly+mCQIyIiIt1oq1aZ5HTBIEdERES6aVm1yiCnBwY5IiIi0o3Ma63qikGOiIiIdKNd2YHbj+iCQY6IiIh0w0t06YtBjoiIiHQjy6GPTHK6YJAjIiIi3ahDqsxx+mCQIyIiIt0o3BBYVwxyREREpBuZc+R0xSBHREREulEDnMwNgXXBIEdERES64apVfTHIERERkW60ihz3kdMFgxwRERHphhU5fTHIERERkW5krlrVlbmnXmj+/PkoKSnBmTNnsHr1agwdOhS1tbV4+umncfLkSVitVgwYMABz5sxBVlYWAGDYsGEYOnQoRDGYNxcsWIBhw4YBANavX48FCxZAkiQMHz4c8+bNQ1JSUk+9HSIiIopCy6pVBjk99FhFbsKECVi+fDn69eun3SYIAh588EGUlJRg9erVKCgowKJFiyK+bsWKFSguLkZxcbEW4pqamvCrX/0Ky5Ytw0cffYSUlBS88cYbPfVWiIiIKEpqgOOiVX30WJArKiqC0+mMuM3hcGDs2LHa5yNHjkRpaWmnz7VhwwaMGDECAwcOBABMmTIFH374YUzbS0REROePGwLrq8eGVjsjyzL+9re/Yfz48RG333fffZAkCd/73vcwffp0WK1WlJWVIT8/X3tMfn4+ysrKuvya2dmp593uzuTkpOn+GtR17BdjYr8YE/vFeBKpT9T8ZrNZEqrd3RGP92eYIPfCCy8gOTkZP/7xj7XbPvvsMzidTrhcLjz11FNYunQpnnjiiZi9ZnW1S9cNCnNy0lBZ2ajb81P3sF+Mif1iTOwX40m0PlErce5mX0K1u6v07BdRFDosPhli1er8+fNx4sQJvPTSS9rCBgDaUGxqairuuecebN++Xbs9fAi2tLS0zbAtERERxR8v0aWvuAe5xYsXY8+ePVi6dCmsVqt2e319PTweDwAgEAigpKQEhYWFAIBx48Zh9+7dOH78OIDggohbb721x9tORERE58Y5cvrqsaHVF198EevWrUNVVRUeeOABOBwOvPTSS3jttdcwcOBATJkyBQDQv39/LF26FEePHsWsWbMgCAICgQBGjRqFxx9/HECwQjdnzhw89NBDkGUZhYWFmDlzZk+9FSIiIopCeHjjqlV9CMoFHJE5R+7CxH4xJvaLMbFfjCeR+kSSZfzHgs8AAGOG5eCRH1we3wbp6IKeI0dERES9T3ip6MItG+mLQY6IiIh0ET7odwEPAOqKQY6IiIh0IbMipzsGOSIiItJF5GIHJjk9MMgRERGRLsKzG4OcPhjkKCGdqXQhIMnxbgYREZ1D5By5ODakF2OQo4Tj9gTw/B+34Ov9FfFuChERnUPkHDkmOT0wyFHC8folSLICtycQ76YQEdE5yKzI6Y5BjhKOelbHXcKJiIxNYUVOdwxylHDUq3HoeVUOIiI6f7xEl/4Y5CjhSFpFjkcFIiIjY0VOfwxylHBYkSMiSgxctao/BjlKOGp+Y0WOiMjYZF6iS3cMcpRwFFbkiIgSAjcE1h+DHCUcmatWiYgSAhc76C/qIPfiiy+2e/vcuXNj1hiiaKhBjmV6IiJj44bA+os6yP3jH/9o9/b33nsvZo0hioYsqx95UCAiMjIudtCfubMHvPPOOwAASZK0/6tOnToFh8OhT8uIOiBz+xEiooTAipz+Og1yxcXFAAC/36/9HwAEQUCfPn0wf/58/VpH1I6W7Ufi3BAiIjonVuT012mQe+uttwAAv/3tb/HEE0/o3iCiziisyBERJQT1MC0KAo/ZOuk0yKnUEFddXQ232x1xX0FBQWxbRXQOErcfISJKCOqJt8kksCKnk6iD3MaNG/Ff//VfqKysjLhdEATs378/5g0j6gjnyBERJQatIieyIqeXqIPcf//3f+PnP/85fvCDH8But+vZJqJz4qpVIqLEoIY3kyBwsYNOog5yDQ0NmDJlCgRB0LM9RJ1iRY6IKDGox2lR5NCqXqLeR+7uu+/GypUr9WwLUVQUrlolIkoIangziazI6SXqitw333yDt956C7///e/Rp0+fiP
uWL18e84YRdYRXdiAiSgxKWEWOs2H0EXWQu+eee3DPPffo2RaiqKgHAw6tEhEZW3hFLiBxGEUPUQe5H/zgB91+kfnz56OkpARnzpzB6tWrMXToUADAsWPH8Mwzz6Curg4OhwPz58/HwIEDz+s+6v1kbj9CRJQQwitySiDOjemlop4jpygK/v73v2PatGm48847AQBbtmzBmjVrOv3aCRMmYPny5ejXr1/E7bNnz8bUqVNRUlKCqVOnYtasWed9H/V+LYsd4twQIiI6J5lz5HQXdZB7+eWX8c477+Dee+9FWVkZACAvLw9/+MMfOv3aoqIiOJ3OiNuqq6uxb98+3HHHHQCAO+64A/v27UNNTU2376MLAytyRESJQdsQWBR58q2TqIdWV61ahVWrViErKwvPP/88AKB///44depUt164rKwMubm5MJlMAACTyYS+ffuirKwMiqJ0676srKwutSE7O7Vbbe+KnJw03V/jQpOSEgztZoup299f9osxsV+Mif1iPInSJ+k1zQAAm80EuBKn3d0Vj/cXdZCTJAkpKSkAoO0l19TUhOTkZH1a1gOqq126VnVyctJQWdmo2/NfqOobggcGj8ffre8v+8WY2C/GxH4xnkTqk7q64CU9ZUmBJCsJ0+7u0LNfRFHosPgU9dDq9ddfj3nz5sHn8wEIlktffvll3Hjjjd1qlNPpREVFBSRJAhAMimfPnoXT6ez2fXRh4KpVIqLEoF3ZgXPkdBN1kHv22WdRWVmJMWPGoLGxEaNGjUJpaSlmzJjRrRfOzs5GYWEh3n//fQDA+++/j8LCQmRlZXX7ProwcI4cEVFiUA/TvLKDfqIeWk1NTcXSpUtRVVWF0tJSOJ1O5OTkRPW1L774ItatW4eqqio88MADcDgc+OCDD/D888/jmWeewSuvvIL09HTMnz9f+5ru3ke9H1etEhElBoUVOd1FHeRUdrsdubm5kGUZFRUVAIDc3Nxzfs1zzz2H5557rs3tgwYNwttvv93u13T3Pur9tEt08aBARGRoSlhFjiff+og6yH355Zf41a9+hdLS0ohULQgC9u/fr0vjiNqjHgwUHhWIiAyNFTn9RR3kZs6ciZ///Oe47bbbYLfb9WwT0Tm1DK3yoEBEZGRKxIbA8W1LbxV1kPN6vfjhD3+o7d9GFC8tix3i3BAiIjonrlrVX9SrVu+//3784Q9/YEdQ3KkHBolDq0REhhY+R04BmCF0EHVF7pZbbsG///u/47XXXkNmZmbEfZ988knMG0bUEbUixwMCEZGxhVfkgGCwC11TgGIk6iD32GOPoaioCJMmTeIcOYorzpEjIkoM6gm3GApysqJABJNcLEUd5E6fPo13330Xohj1aCyRLtS5cdwQmIjI2MIXO4R/TrETdSqbMGECNm3apGdbiKLCihwRUWKQtYpcMG5wSkzsRV2R8/l8ePjhh1FUVITs7OyI+xYsWBDzhhF1hKtWiYgSg1aRE1iR00vUQW7IkCEYMmSInm0higorckREiUHbENjUMkeOYivqIPfoo4/q2Q6iqCnqHDkeEIiIDI1z5PTXpWutfvHFF/jggw9QU1ODZcuWYffu3XC5XLjmmmv0ah9RG2qA4yW6iIiMrfWqVQU8bsda1Isd3nrrLTz//PMYOHAgtmzZAgCw2+14+eWXdWscUXtahlbj3BAiIjonuVVFjrsNxF7UQe7Pf/4z/vjHP+KnP/2ptvrkkksuwbFjx3RrHFF71Cs68MoORETGJreuyPGwHXNRB7mmpiY4nU4AgBBafRIIBGCxWPRpGVEH1CFVzpEjIjK2tnPkeNyOtaiD3FVXXYXXX3894rY333wTY8eOjXmjiM6Fc+SIiBJD2ys7xLM1vVPUix2ee+45/OxnP8Pbb7+NpqYmTJw4ESkpKXjttdf0bB9RG+qBgBU5IiJja7uPHI/bsRZ1kOvbty9WrlyJXbt2obS0FE6nE1dccQUv2UU9jhsCExElhpZ95NQrO8SzNb1T1EHuT3/6E+644w5ceeWVuPLKK/VsE9E5cUNgIqLEoB6nOUdOP1GX077++mtMmDAB999/P1auXAmXy6Vnu4g6FL58nWGOiMi41EO0Nkcujm3praIOcq+88go2btyI22+/HcXFxbjuuuswffp0rFu3Ts/2EbURnt24JxERkXGxIqe/Lk1wS09Pxz333IM333wTa9asQVNTEx5//HG92kbUrvAqHA8KRETG1aYix5PvmOvSJboAYOvWrfjggw9QUlICh8OB6dOn69Euog5FDK2yTk9EZFja9iMCNwTWS9RBbv78+Vi7di0EQcCtt96KN954A4WFhXq2jahd4Vd04NUdiIiMS1YAQQj+AziKooeog1xzczMWLlyIoqIiPdtD1KnwAwEXOxARGZeiKBAFgRU5HUUd5J5//nkAQGlpKSoqKpCbm4v8/Hy92kXUIa5aJSJKDEqrihyP2bEXdZCrrKzEE088gZ07d8LhcKCurg5XXnklFi9ejNzcXD3bSBQhfDSVl+kiIjIuRVEgCIJ2jXbmuNiLOsjNnj0bl156KV5//XUkJyfD7XZj8eLFmD17NpYtW9btBpw+fRqPPPKI9nljYyNcLhe+/vprjB8/HlarFTabDQAwY8YMjBs3DgCwc+dOzJo1C16vF/369cPChQuRnZ3d7XZQ4pAjhlbj2BAiIjonRUHk0Cp40I61qIPctm3b8PLLL8NisQAAkpOT8fTTT2vBqrv69++P4uJi7fO5c+dCkiTt8yVLlmDo0KERXyPLMp566inMmzcPRUVFeOWVV7Bo0SLMmzfvvNpCiSFy1SoPCkRERiUrSqvFDvFtT28U9T5yGRkZOHLkSMRtR48eRXp6eswa4/P5sHr1atx9993nfNyePXtgs9m0hRdTpkzB2rVrY9YOMjaZix2IiBKC3GpolSffsRd1Re7BBx/E/fffj3/5l39Bfn4+SktL8Y9//COmGwKvX78eubm5GD58uHbbjBkzoCgKxowZgyeffBLp6ekoKyuLWGiRlZUFWZZRV1cHh8MR9etlZ6fGrO0dyclJ0/01LjSi2HL+4chMRk6frvcj+8WY2C/GxH4xnkTpE7vdApMoINORDADIyEhOmLZ3RzzeW9RB7kc/+hEKCgrw/vvv4+DBg+jbty9+85vf4JprrolZY1auXBlRjVu+fDmcTid8Ph/mzp2LOXPmYNGiRTF7vepql65nBzk5aaisbNTt+S9Ufn/L0HtVlQuWLlbl2C/GxH4xJvaL8SRSn7jdPgBAQ0MzAKCmtgmVqZZ4Nkk3evaLKAodFp+iCnKSJGHixIlYs2ZNTINbuIqKCmzZsgULFizQbnM6nQAAq9WKqVOn4uGHH9ZuLy0t1R5XU1MDURS7VI2jxMXFDkREiaH19iPcEDj2opojZzKZYDKZ4PV6dWvIqlWrcP311yMzMxMA4Ha70dgYTLaKomDNmjXalSRGjBgBj8eDrVu3AgBWrFiBSZMm6dY2MhZJVmA2cb4FEZHRcfsR/UU9tDpt2jT84he/wEMPPYS8vDytUwCgoKDgvBuyatUqzJw5U/u8u
roa06dPhyRJkGUZgwYNwuzZswEE50gtWLAAs2fPjth+hC4MsqzAbBIRkCQGOSIiA1Mv0SWyIqebqIPcCy+8AAD44osvIm4XBAH79+8/74aUlJREfF5QUIB33323w8ePHj0aq1evPu/XpcSjKMEgB0hctUpEZGDqJbq0Vatxbk9vFHWQO3DggJ7tIIqarAAmdWiVQY6IyLCCGwIj7FqrPGbHWtT7yKkqKiqwa9cuVFRU6NEeok7JsgJzaAsShad3RESG1bKPXPBz5rjYi7oiV1paihkzZmDnzp3IyMhAfX09Ro4ciYULF6Jfv356tpEogqyELXbgUYGIyLAU7coOXKCml6grcr/85S8xfPhwbN26FV999RW2bNmCESNG4JlnntGzfURtqIsd1P8TEZExBbcfYUVOT1FX5Pbu3Yv//d//1a61mpKSghkzZmDs2LG6NY6oPZwjR0SUGNShVc6R00/UFbmRI0di165dEbft2bMHo0aNinmjiM5FVsIqcjwoEBEZlrrYQa3IcRAl9qKuyBUUFOCnP/0pbrjhBuTl5aG8vByff/MHOGgAACAASURBVP457rjjDrz88sva42J57VWi9iiyArOozreIc2OIiKhDbTcEZpKLtaiDnM/nwy233AIgeEksq9WKm2++GV6vF+Xl5bo1kKg1SVZgNnOOHBGR0bW5RFd8m9MrRR3k5s2bp2c7iKLGoVUiosQghzYE5hw5/UQd5ACgubkZJ06cgNvtjrh99OjRMW0UUUcURYGigKtWiYgSQOuKHE++Yy/qIPfuu+9izpw5sFgssNvt2u2CIOCzzz7To21EbajHAO4jR0RkfG1Xrca5Qb1Q1EFu4cKF+N3vfofvfve7eraH6JzU4GYSObRKRGR0bVatchQl5qLefsRiseDqq6/Wsy1EnVIPAmpFjpfoIiIyrrarVuPcoF4o6iD3+OOP49e//jVqamr0bA/ROakVOC52ICIyvpZLdLV8TrEV9dDqwIEDsWTJEvz1r3/VblOT9v79+3VpHFFr6r5x2pUdWKYnIjIsOXSJLm2OXJzb0xtFHeSefvppTJ48GbfddlvEYgeinsSKHBFR4lAUBSKgDa3ymB17UQe5uro6PP7441pnEMVDS5BTDwrxbA0REZ2LogCiKIQNrca3Pb1R1HPkfvjDH6K4uFjPthB1qmWxA/eRIyIyurbbj/CYHWtRV+R27dqFv/zlL3j11VfRp0+fiPuWL18e84YRtYdBjogocbS5RBcP2TEXdZD70Y9+hB/96Ed6toWoU5wjR0SUOFpvP8Jjdux1GuS++uorAEBeXp7ujSHqjMwrOxARJQw5VJET1YocR1FirtMgN3PmzHPeLwgCPvnkk5g1iOhcFA6tEhElDEVRIEZU5OLcoF6o0yC3fv36nmgHUVRaLtHFgwIRkdEpCiAgbI4cd5KLuahXrRIZQevFDizTExEZFy/RpT8GOUooam5rqcjxqEBEZFRt5sjxmB1zDHKUUNSKnLrBJIMcEZFxKYoSOl5zOoxeGOQooajBTRQEmEQBEo8KRESGxQ2B9Rf1PnJ6Gj9+PKxWK2w2GwBgxowZGDduHHbu3IlZs2bB6/WiX79+WLhwIbKzswHgnPdR79VSkQuGOUWOc4OIiKhDihIcVuWGwPoxTEVuyZIlKC4uRnFxMcaNGwdZlvHUU09h1qxZKCkpQVFRERYtWgQA57yPerfwipwgChxaJSIysLaLHXjMjjXDBLnW9uzZA5vNhqKiIgDAlClTsHbt2k7vo94tfI6cKAjcR46IyMDUS3QB4LxmnRhiaBUIDqcqioIxY8bgySefRFlZGfLz87X7s7KyIMsy6urqznmfw+GIR/Oph6i5TRQEiDwoEBEZmqwoEBBMcqIgcGhVB4YIcsuXL4fT6YTP58PcuXMxZ84c3Hzzzbq/bnZ2qu6vkZOTpvtrXEjO1DYDALKyUmA2i7DZLN36HrNfjIn9YkzsF+NJlD4RRQFJScHjtCAIsNu7d8xOFPF4b4YIck6nEwBgtVoxdepUPPzww5g2bRpKS0u1x9TU1EAURTgcDjidzg7v64rqapeuQ3M5OWmorGzU7fkvRLW1bgBAQ30zoABNbl+Xv8fsF2NivxgT+8V4EqlPApIMnzeAyspGCEL3jtmJQs9+EUWhw+JT3OfIud1uNDYG37iiKFizZg0KCwsxYsQIeDwebN26FQCwYsUKTJo0CQDOeR/1bupQqiAGf7A5tEpEZFyKEtxlAAjOkeNih9iLe0Wuuroa06dPhyRJkGUZgwYNwuzZsyGKIhYsWIDZs2dHbDEC4Jz3Ue8mh7YbUefI8RJdRETGJcuKtmKVc+T0EfcgV1BQgHfffbfd+0aPHo3Vq1d3+T7qvcK3H2FFjojI2NTtRwBAEHjM1kPch1aJuqL19iO8sgMRkXHJYduPiAI3BNYDgxwllJaKnDpHLs4NIiKiDimKAhGsyOmJQY4SSuuKHOfIEREZV+sNgZnjYo9BjhKKVpETBZ7dEREZnILIOXJctRp7DHKUUCJWrYrgJbqIiAxMblOR4zE71hjkKKFErFoVOEeOiMjIFEWBGLb9CI/ZsccgRwklfGiV248QERmbLAc3cAdYkdMLgxwlFHVxgyiEzu54ekdEZFitK3LMcbHHIEcJRc1tghi6sgOPCkREhtV21SqP2bHGIEcJRdt+RL2yAytyRESGpSgKBISvWo1zg3ohBjlKKK0v0SXxqEBEZEiKokBBeEWO85r1wCBHCaVlQ2B1jlycG0RERO1SI1vLHDlw1aoOGOQoobSuyPXms7tjZQ3w+qR4N4OIqFvU+XDhFTnOkYs9BjlKKBfKJbq8fgn/89Y2bNhVGu+mEBF1i5rZWq7swEt06YFBjhKKmtuCl+hCr63IeXwSJFlBU7M/3k0hIuqW8BNvABDAipweGOQoobRZtdrFY4LHF4DXb/zhSl+ojb4AJwESUWJqqcgFP4oiK3J6YJCjhCK3vtxLF5Pcknd24ffv7tajaTGlBji/n0GOiBKTOmISvv1Ibx1FiSdzvBtA1BWyokAMnX50Z7FDVb0HyclWHVoWW2pFzhswfvWQiKg96uFZFFo+MsfFHitylFAUGedVkfMF5IRYCaoNrSbAMDARUXsUqKtWwzcEZpKLNQY5SiiyokAInd4F51t07aDg9Uvw+gJ6NC2m1KFVH4dWiShBtZ4j15sXqMUTgxwlFFmOnCMndaEipygKfH4pQRY7hIIch1aJKEFpc+QEXqJLTwxylFAkRWmZb9HFVasBSYaiBLf2MDo1wHHVKhElqjZz5ND1URTqHIMcJRRFVrQ9ibo6R84bqnJxjhwRkf7U43N4Ra6X7uEeVwxylFBabz/SlbM7NRQlREXOzzlyRJTY1OOztiGwwIqcHhjkKKHIcthBQezaxFlvWJXL6BNuW4ZWjR86iYjaoy12CH0uipwjpwcGOUoobTcEjv5rw6tbRt9olxU5Ikp0SruLHZjkYo1BjhLK+WwIHL5a1egb7bIiR0SJTj0Njdx+JG7N6bXifmWH2tpaPP300zh58iSsVisGDBiAOXPm
ICsrC8OGDcPQoUMhhv5yL1iwAMOGDQMArF+/HgsWLIAkSRg+fDjmzZuHpKSkeL4V6gGttx/pymKH8FDk80lAcsybFzPhFTlFUbQzWiKiRKHNkevmvGaKTtwrcoIg4MEHH0RJSQlWr16NgoICLFq0SLt/xYoVKC4uRnFxsRbimpqa8Ktf/QrLli3DRx99hJSUFLzxxhvxegvUg2SlZY5clytyvpZhSq/Bt/UIX63qN3hbiYja02ZDYHBDYD3EPcg5HA6MHTtW+3zkyJEoLS0959ds2LABI0aMwMCBAwEAU6ZMwYcffqhnM8kglIiKXPBAEe0ZXkRFzuDbeoTvH8e95IgoEbU/Ry6eLeqd4j60Gk6WZfztb3/D+PHjtdvuu+8+SJKE733ve5g+fTqsVivKysqQn5+vPSY/Px9lZWXxaDL1MDlsmFGtzMmKAlMUQ4/hc+QMH+RatzXJEsfWEBF1Xcs+ctA+cmg19gwV5F544QUkJyfjxz/+MQDgs88+g9PphMvlwlNPPYWlS5fiiSeeiNnrZWenxuy5OpKTk6b7a1xITGYTrFYTcnLSkJZqBxDsR4vZ1OnXWm0tYciebDN03yhhwTQ1PQk5Ofr/rBqBkfvkQsZ+MZ5E6JOmQDC0ORzJyMlJg91ugWjyJkTbuyse780wQW7+/Pk4ceIEli1bpi1ucDqdAIDU1FTcc889+OMf/6jdvnnzZu1rS0tLtcd2RXW1q0uT5bsqJycNlZWNuj3/hcjj9UOWZFRWNqK52QcAqDjbCJul8yBXXevW/l9Z5UJlpXFXOzS5fdr/yysaYEXvP4vl74sxsV+MJ1H6pLraBQBobPCgsrIRfr8Ev19KiLZ3h579IopCh8WnuM+RA4DFixdjz549WLp0KaxWKwCgvr4eHo8HABAIBFBSUoLCwkIAwLhx47B7924cP34cQHBBxK233hqXtlPPan2JLgBRh/Hw4UqvwYdWvX4ZFnPw15N7yRFRImpzrVUBnCOng7hX5L799lu89tprGDhwIKZMmQIA6N+/Px588EHMmjULgiAgEAhg1KhRePzxxwEEK3Rz5szBQw89BFmWUVhYiJkzZ8bzbVAPkRVELHYAop9zkUhz5PwBCalJFtQ2eg2/5x0RUXsUcEPgnhD3IDdkyBAcPHiw3ftWr17d4dfddNNNuOmmm/RqFhmUHFaRE7TFDtF9rc8vw2YxweuX4DV4lcsXkJEWCnJGvwpFdzS4fUixm2ESDTEoQEQ6aLP9CCtyuuBRlBJK8BJdwf93Z2g1NbT60+gVOZ9fQora1l5WkfMHJDz72lf45y6uNCfqzeTW24+ga3t/UnQY5CihBC/RFTwomMK2H4mG1y8hyWaC2SQafrjS65e10Gn0+Xxd5WoOoNkrobLOE++mEJGO2p8jxyAXawxylFBkOWyOnNhxRe4P7+/DlgNnI27zBYJDq3arCT6fcYcrZUVBQJKRmqxWD43b1u5oavYHP3r8cW4JEempvQ2Bea3V2GOQo4QSXpFT5120rsgpioJNeyuw+2h1xO1evwRrKMgZuSKnzolL66VDq2qAa/IE4twSItITNwTuGQxylFAiL9HVfkWu2RuArCha5Ufl80uwWUywWU2GniOnhkx1jlxvW+ygBjg3K3JEvVrL0Cov0aUnBjlKKJKiaGd3YgerVl2hoOBqFeS8fhlWiwib1Wzo4Uo1ZNotJphNgqGrh92hVeSaWZEj6s1ahlaDn4sitx/RA4McJRQ5ig2B1Upc6yDnCw2tqluQGJUaMq0WE6xmk6FDZ3e4Q0Gbc+SIejf1yNUyRy767aIoegxylFAiNgTuYNWqGuDaHVo1hxY7GDjI+QNqkBNhtYiGbmt3cI4c0YVBrb5px2ywIqcHBjlKKJGX6Are1roipwU5TyDioOH1y7BaRdisxq7IqW2zWkywWkzwBXpXRU4NcM3egK7XOiai+OKGwD2DQY4SSnsbArc+MKiVOElW0OwNhiJZDm7pEazIGXyOXGhOnM2sDq0aN3R2hzusEuf2sipH1Fu1v/0Ik1ysMcgZRJ3Li037yuPdDMOL2H6kk6FVAHCFhvHCq1xGr8ipIdNiDg2t9raKXFj/cJ4cUe8lsyLXIxjkDGLDzlK8/t4+/mHrhBy2/Yipgw2Bw1dDqqFBrWrZLMGhVSPvzaa2zWoRYTX3xjlyAYSO6xHVOSLqXRS51Rw5gXPk9MAgZxA1jV4AQG3oI7VPVtB21WrrilxYGFarc95Ay0pQm8UEr0827AFFrcjZ1DlyBh4G7o4mjx+Z6Tbt/0TUO7VXkeO02NhjkDOIOlcwwNUxyJ2THLEhcMtt4VzNfiTbzMH/u1tX5IJz5GRFgWTQI4qvzWKH3lWRc3sC6OtIAsC95Ih6s9arVgVW5HTBIGcQdazIRUVRlE6vtdrU7EduVjKAsIqcv2W40mY1RdxmNOqcOKtZhM0s9qqKnKwoaPL4kRMKcry6A1HvpR6ZWzYE5hw5PTDIGURtqCKnfqT2SbICIfRTK2hDq5GPcTX70TczCQJaglz4cKU9FOSMGpDUipzFLMLSyypyHq8ERYEW5LiXHFHv1WbVKveR0wWDnAH4AzIaQ0OAHFo9N7m9ilyrA0OTx4+0JAuS7eb2V61a1CDXEpDOVLpworxR9/ZHwxeQYTWLEAQhtNjBmIGzO9QKXEaqFVazyDlyRL2Y3OoSXYIQrNIxzMUWg5wB1IdV4epcvji2xPhkuZ3FDmEluYAko9krITXJgtQkS5tVq8HtR4Lz58KHVpd/dAhvfLC/R95DZ9RLiQEILXaQes2BT63ApdiDQZsVOYqFkxWNePrVLyOOpRR/6mErfNUq0DLkSrHBIGcAangzm0TOketE5By54G3hFTl1O4uUUJBrPUfOZm6ZIxde6SqtdqO8xm2IzSp9fhlWS/DN2SwiFAABKf7tigW1ApdiNyMlLGgTnY8DJ2pRVe/B0bKGeDeFwijtVOSAtvOa6fwwyBmAOi/uotxUzpHrhCwrWoBrqci13K8Gt5SkYFBoPUfOam2ZI6eGO7cngIYmHwKSjJp6T0+8jXPyBSRYzaGKXOhjb5kn5w6ryKXYzNxHjmKirMYNACivdse5JRROPTaHX9kB4IKHWGOQMwC1CnexMx2NoUBBbSmKAgVoM0cufNhRDW4dDa3azG3nyFXUthz8y2vj/4fA5w/OkQMAS6gy11vmyalzFpPVihyDHMVAWSjAlTHIGUpHFbneMlXEKBjkDKDO5YXZJKJ/TgoUAPWcJ9cuddjzXBsCN7UKcq7QPmVq9c1iEWG3Rc6RK68JC3IG+EPgC7TMkbOZ2y7MSGTuNnPkEnNo1R+QseKTb1HTEP8KLgHl1U0AgLKapji3hMKpR+Y2c+SY42KKQc4A6hq9cKRakZkW3O0+FsOrAUnGnz7cb5iVmLGglunPtY+cVpGzW5CSZIHXL8EfkLWVoKIgtFTkQvu1lVe7IQiAzWpCRU1zT72dDoXPkVM/GnXPu65q8vhhNgmwWkSk2C0JO7R68GQt1m05hS/28PrI8dbk8aPB7YdJFFBe7e6xas/+E7U4Wso
5eecit95+pIOr8dD5YZAzgNpGLzLTbHCkBoNcLLYg2X6oEhu+KcOy9/b2mmpO24pc5O1A2KrIUEUOCIY7b9hKUG2OnK+lIpeTkQRnVrJBhlbD5siF2uwP9I6h1abmAJLtFgiCgBS7GV6/1CNTCRRFiekE6wMn6wAAh07Wxuw5E0FtoxeSbKyfRbWKPuwiB5o8AW0rJz3JioLXivfgrZKDur9WImtZtRr5kTkuthjkDKDOFQpyMazIfb6zFCl2Mypq3PjHhqPn/XxGILdzAWYguElwg9sHSZbhag6emdutJi3INTX74fNLsKkrQa2RCwgqatzIy05GXlayQYZWwypyZnWOXO8I426PHyn24NB2sj3UPz1QlVu7+SSeevVL+GO0aORgKMAdPtNwwcxpbXT78OxrX2HNVyfi3ZQI6ry4UUNyQp/rP7x6rLQBDW4/TlY0otHNqTAdabMhsLb9SOIluf3Ha/DXjw4Zcn4fg1ycKYqCWpcXjlQb0pIsMJuE867IVdS4sf9ELW65+iLcOKofPtpyCofP1MeoxfGjVeS0y70E/9Po9uOXy77C6i+Ow9XsR0pSsOKTGgoMwYqcrFW3zKbgEKvXL0FWFJTXupGbmYzcrGTUNHjiHpoiVq2G2uztLRU5TwApoQCXkhTsH70v0yUrCtZvP43aRi92Hak57+fz+AI4VtaIvKxkeP0STla4YtBK49vxbRV8ARlf7a0w1B+zspommEQBlw/KDn2u/8nYzsNVAIJzwPafuLCqsl2h/pi0XezQU6+v4KW3v8G7G9sWMxRF6dLPcfEXx/HxttOG3OKGQS7Omr0B+PwyHKk2CIIAR6rtvCtyG74phSgIuO5yJ+65cRDSki14/8vjsWlwHKkVOUGMPLv7564yeH0SPt9ZioYmn1aJSwkbWg3fZFcQgnO0fH4ZdY1e+PyyVpFTAJytje88uYg5cr2sItfk8SM5FLBTeqgid/BELaobgr9Tm/dXdOs5PL4A1mw6AVezH4dP10NWFNxx7YDg85+6MP6Qbz14FkBwKsKps8YJr+XVbuRmJaNPhh1Wi9gjVfVvDldhSP8MJNvM2HPs/E8Oeqt4z5E7cqYBu45Uo+TrUxEnjJIs46W3d+HVd/dE9TzV9R4cOhWcTvHFrjJd2no+GOTiTN16RF3o4EiznVdFzh+Q8M/dZbhycDYy02ywW824cXR/7DpSHbE6MxHJ2nyLyMUO5TVuJNvMqG/yYc+xGq0Sp82R84SGVs0tP+42iwlev6R9T/Iyk5CXlaw9Xzy1vrJD8LbeUZFzewJhQ6vBj+ezKbCsKJ3O2frn7nIk2cwYd4UT3xyuQrO368Fx9RfH8c5nR/BmyUEcOFkHkyhgzNC+yM1KxqHQfLlmb8BQlapYavL4sf94La673AlRELDlwNm4ticgySitCq1UrXbDmZUMURCQl5Ws+xYkVfXNOF3ZhFFDclA4IBP7jtecV7/7/BKqDbB/pR4UudUoihB5u2rf8Rr833/sRmVdbE+i128/DYtZhNcvYcM3LQHsvX8ex+6j1dh6sFKbJnEum/YFFzUNLXBg8/6zhjuxTuggd+zYMdx7772YOHEi7r33Xhw/fjzeTeoy9aoOjlQrACAz1XZeV3f4+/ojaHT7cXNRgXbbDaP6wWwS8MnW0+fX2DjT5si1WuwAAPeOH4zMNBsCkqxV4sLnyHn9MqyhuXEAQhW5sCCXnYK+mcELuVd0c8GDoihYs+kEnvi//+z2HzpFUbQVtsF2qosdOj9wHDpVh2MGLPuHC1bkQv0T+tjdlatn65ox549bMOuNryO2AVEUBR9tPYU/vL8Px8oasO3gWYwt7IvrrnDCH5Cx49vKLr/OR1tPwZFqxdYDZ/HZjjO42JkOm9WEYQUZOHS6Hpv2lePxJRsx961t2pl7vAQkGcfLG3D4TD3OVLXMF9tztBrzl2/v1s/Izm+rIMkKbhzdD4UDHPh6f9eGVwOS3O5cQq9PQkMX55hJsoxX392D5/6wGRu+KUVlXTPysoMnYc7sFJRVN6G+yYfVXx5vsz1MszeA9744FrHatKsh7JvD1QCAkUP64LKLs1DT4O3WyZ8/IOPjrafwy9e+wlOvfolX392DqhgHmWjIsoK1m0/io62nOp3mUFHrxsmK6HdCkLWh1dYVueDt/oCMv396GL9ZsRPbD1XilVV7YjaPtb7Jhy0HzuL6kfkYWuDA+u2nIcsK9p+oxftfHsd3LsuFI9WKVRuOnvNnQFEUbNpbgcH9MjD5uwPR7A1gexePIXozx7sB52P27NmYOnUqJk+ejOLiYsyaNQtvvvlmvJulURQFXr+ExiYfKuuaUVnvQWVdM5q9AaSnWJGdbkd16ECjVeRSbdh5uAovv/0NjpQ24IZR+bjucie2HazEgZN1KByQicIBmdh/ohbfnq7D4P4ZGD00B30dSdjxbRU+2X4at1xVgEsHZGrtyEixYmxhLv65uwx3XjcQSVYTFCU4T0GBov1fVhTsPlqNTXsrYDYJGDMsB4P6ZSDFbsHBk7VYv/0MFEXBpLEX4fJLsiEIQnCOX6MXdS4fLGYRVosIq9kU/H/owu/B4WMJFrMIURTg9UnwSzLSkq1IsZu151EQPNAePFmH4+WNyO+TjBxHEj7fWYot+89iYF4agLZz5JJtZoy9LBc1jV4U//OYFuSsFhOsZhH7jteittGDrLQM7Xtis5hQXe+BENqOxJFqDQ1tW1Fe7Uaj2we3JxB6T8HnEUUBPr+MY+UNWL/tNCpqm/Hdy/Nw7QgnFEXBh5uCB8O0ZAtefXcPDozqhysHZyMrzQ4guCgj+E+GLCuQlWCAz063QxQFSLICT2glrbpFihroztY1Y9/xGiTbzejrSEJZtRt7j9cgLdmKKy7JxsfbTqHk61MAgJGD++Dmov4Y1C8DVosJHl8Abk8AsqxACq3elOT2P5pNIrIz7DCbBBwpbUBtgwf5fVKQlW5HZV0zahu9sFlMsNtMSLKakWQzIyPVimRbsB8Dkoyztc2oqm9G38xk9M1MgiwrqHN54fVJaPZKbSpy5TVuVNS6Ud3kx5GTNbBZTMjOsENRgAa3DzaLCX0y7DhwshafbDsNj1fCxc50LZDJioJfL9+Oh+4aDrNJxHtfHMOOb6tgEgV8Gdoe5LtXOHGxMx3Z6TZ8sbschQOyQkE/2A+yHPxdPVPZhKr6ZqQlW5GRakVmqg1vf3oYoijgv+4bg6Wr9uBEeSOGXeQAAAwryMSGb8rw+nv7MDAvDTUNHvx6+XaMGZqDf7lhEHIyk1Dv8qHJ44fHJ8EkCkiymSEKoZ8HSUFAlmGzmJCRYoPNKkKSgr+Hn2w7DVezHyMuzsagfuna3oipSRZYLSZIsoKmZj9Kq5vQ0BT8Pp2ta8anO85E7EX5neG5KLwoE2+WHIQkK5j/1+D36tKLMkO/TwIEIfhHVpaV0LGqGRkpNuRlJcMfkLHlwFlkp9sxMC8NVxXm4k8fHsDhM/UY0t8Bf0DGyYpGeEO/4xazCIsp+NEkivj6QAU+3HQSAUnGqCE5GHFJFvo6knDoVB3WbDoBty
eAywZmYthFmfAFJJhEEQV9UxGQZHyxuxxnqly4qG8aLslPxyX56di8rwI7vq1CjsOOP394AAqgVdOdWcn4el8F/vuPX6PO5cPazScxZfxgjB6Wg6ZmP5as3I3Sqia8u/EYrhiUDZ9fwpHSBgwrcGDyuIuRk5Gk7euZmmSBySRo87kkWcGhU3VYv/008rKCUzHUY9HuozXaiWOy3QxFCc7d9fklmEwCTKIY2nYneCw5dKoOf157EOU1bgwrcODa4Xn4ZNtp7Pi2CpPGFuC27wyAzWJCfZMPh0/X42hZAypq3Ghs9uPa4XmYfOMQHDhRix3fVkGSZVjNJhT0TcWQ/hnITA9O0ymrasLRsgZkptpwcX46zKIItzeA8uomlFa74Ui1YWBeGpZ/dEib87fy8yO4/JJsDOnvgMUsoqyqCTarCZdfko0DJ2rx/lfHIckKvn/dxbj9moEQQj/HoijA7Qng1FkXXM1+5DjsCAQU7D0WDL2t58g1ewPw+AJ47b29OFnhwg2j+uHSixxYVrwXf/zwAPr1ScGBk3WQJBlmk4jB/TNw5aA+yMtK1harhQtIMnx+CQE5+PdMloPzYiVZwfjR/XGm0oWlq/Zg0YodOHSqHn2zkjFt0jB8sbscyz86hH9sOIrdR6vh8Um4YWQ/FA7IRFV9M/ySDLMo4kxVE+67ZSiGDchEdrodn+0oxfCBWbBaTNjxbSUaXD7ccvVFbdrVUwQlQccCqqurMXHiRGzevBkmkwmSJGHs2LFYt24dsrKyonwOl27XfFu7+STe+ewwWj+9SQwGB3er4Z1l/3k9rBYT1m05hRWffIvUJAsudqZjaXIoqAAAF7VJREFU99Fq7TF9MuyoCivBZ6a1VO/MJhGAgv45qfiv+8aEPm9xorwR//2nLVG1vU+GHZKstKkMZqfbASiobvAiNcmCFLsZzd4AGs5jub+Azi+gbDGLGDm4D/Ydr0GTJ4D/uPMyXDM8D16/hEcWb8D40f0w9eahqGnw4KlXv8Rt3xmAu68fBABY8Nft2lYRE8b0x7/dPBQ5OWl4/vUvse1gMAhclJuK5x+4us3jzyU1yYLcrCQcORNZ3bipqD/uuWEQ3v70CD7e1v0K6H23DMWNo/tDkmU8tPDzqOaU3Di6HzJTbVi7+STc3kCHP2t6MIktK4jDWc0i/AE5oo/vmzgMN47qB0mW8fBvPu/SdWRzs5KR47Dj6JkG9M1Mws++PwJNzX4s/n87tbl2JlHAPTcOxncuy8WaTSfQ1OzHT24vhCAIWPn5EXzQjVWXk6+7GJOvuxinK1343cpdePj7IzAwLx21jV4889pXuGJQNn5652WQZWDdlpNYs+kk/AEZoih0e1Vrnww7+mTY8e3p+jbf13MZcXEWvnu5Eyl2Mw6drseHm05AkhUM7p+BB269FK+/tw8nulBVUd1yVQGmTBgCV7MfM175Aj6/jMw0G1zN/k63xxl+cRYy02zYfrAy4ufx8kuyMSAvFV/tqUB1gyf4h15pOSZkptkwuF8GTp51oSKs6nXntQNxy9UFmPvmNpTXuDFz2hgMys/AlgNn8eq7e5DjsGPKhCEo2XwSh04HF3oJQvCk799vvwwnKxrx8bbTyEqzYaAzDdsPVWl7UHYmPdmCqTcPxdWFuQCAp1/9MuK43BmzSURAktEnw477Jg7D5ZcEF2nUNHjwzmdHsGlfBSzmYKBXf+/NJgF9M4PB8XRlk/Yc1tCJprpfpsoUOjGMhigImHrzEAzKz8CnO85g77EarcBgs5jgD8haO64u7AtBELB5XwVS7GZ4fNI5X8diFvGdy3Jx/62XQhAEfL2/AsuK92qvm2w344HbLtVWHIf/fvbPSUWyzQSPT8Kpsy7tZ0ItEqi/Wz6/3GEbhl+chf+8dyQkWcbM1zejzuXFDaP64dbvDEBGihX+gIxnX/8KNQ1eOLOTkZZk0X5ewplEAYsf/S7Skq34cNMJvP3ZEQDB45svIKOgbypm338VcnPTUVmpz96toiggOzu13fsSNsjt2bMHv/zlL/HBBx9ot912221YuHAhhg8fHseWBR05XYev9pQhyWpGWoo1NJk+BdmOJJhEAf6AjLIqFw6eqIXJJGJ8aCjU7fFjx6FKjBnWF3abGUfP1GPbgQpcPTwPA/LSUVrpwr5jNbjskizk90lFeXUTth88i4pqN5q9AfzL+CHoGzo7be2zbadwtrZZO/sWEPooqGdKAgb1y8Dw0IHl21O1OFPpgsvtR25WMoouy4MsK/h8+2kcOFEDtycAq0XE4P4O9M1Kht8vw+uX4Av9U1eFptotsFlN8AWCwyvJNjNMJhENTT5t6X6wKiDAYhZRODALgwscOFXRiJPljRg1LAeZaXY0ewPYcfAsRl/aF3ZrsJqz+3AVBhc4kBS6WsPeo9Xol5OqbeWiKEowbDb50MeRpAVcSVZw7Ew9dh2uwqD+GbgydCDZfuAsNu0tQ36fVKSnWOEPBN+H1xd8L8FqURLGDs+D1WLCkdN12HW4CnarCblZKRg1LEcbPqh3eVFWFazwCIIAkygEV8yKAsyhM/3qeg8q69yAAphMLWft14/qr1UWt+6vgMvtQ3ZGElzNPpRWNiEnMwmjhvVFTb0HX+8rx8X5GSgK/WFxe/zYe7Qa+47VoNkbQB9HEtKSLTCJYqg6IEAUgx9Notjyf1Ow4lhR44bXJ2HoRcF+PVneiOr6ZuRlpyAnMwk+vwy3xw+3JwBXsx91jR7Uu3wQhOAfKWefFPTNTEZppQvHyxuQarcg25GEJKsZNqsJI4fmaEPG+4/VoLymCYqCYJU6ww6vX0JlTTNEUUB6qhUebwDl1cEtYsZcmgtRDFZw1e8zAJytcWPfsWrYrCYU5Kahf9+0dn8HvH4J33xbiep6DxqavBAFQfvZs1lE9M9NQ152ChrdPtQ0eFBT74HHJ+G2awdqbW6t0R1cYBPentoGD97beBSSrCA3KxkZqVYk2cyQJAVNHj8UJVj9NJlEmEUBzT4JtaEV06IooCA3DVddlgeTKMDt8aO0qgmNod+XRrcfXp8Esyn4h7B/3zRkZdi1uZV9MyN//0+UNWDT3jJMHjcIdlvw5Ouz7afh9QUgyy2XvlP/FORmqSu4vSircmlViQlXXaT9Xp0sb8C2A2dxtLQejlQbCgdmISPVBp+2AbcEnz9YJbk4Px3DBgRPrtXjXnm1G440G4aGqoKKosDrk2CzBkPJyfJG+PwSCi/O1k4SXG4fDp2qQ0CScVVhLgRBQHl1Ez76+iSm3jIMJlPwpOHjLScx7sp8pCZbIcsKth88i5PljWho8mLSNQORl53Spg/dHj8+Cw29ZabZEZBkNLp9kGQFoUMjBAgY4EzD8Ev6aG0CgG++rcS+o9VISQ5N5Qid2DrSbLDbzJAkGX5JQSAgw+MLwOX2IzXZgjuvu0S7yky4A8drsHHnGdisJjjSbBh2USYu6ReskCmKgp2HKvHFrlKMGNQH11zuhC1UnT1R1oADJ2pQ7/LB4w2gIDcNwwZk4v+3d+9BUZX/H8Df7MIu4oWbASsklo2EOcm2XFLBC
6gYAmpqOiU1QjoMKKlDM2jmjWoiDVPEyELt4nQRDRUd09C0lFAEIRxv4CXEBRQQQZTL7vP7w58nKeELcllX368ZZ9jznOeczzkf99kPz9nDqbh5BwVFNyAAdDc3hYNtd/R16ImS8rvvmYHP2Epj/j3lVbeh0ws8ZdUNt+40IvfcNfTsboYXn3sKQtz9DMgruA7LHkqYK+XQ6wSUCjn69bGEVQ8lSspvoaFRD3dXe2kcA+7OlOWev4aCKzdQXduAiSP6w6aXudSu0wtknynFs46WsLXs9s/7qfoOcs/de9/Wo75RB51OwMxUBqVCfvefmRym8rtXgWQyE8hMgJdc7PHU/39lpqqmDjKZCXpaKJoc6yXtTZRX3YZ6gB1kMhNcKK5C8bUaqGy7w9RUhqKSanS3MMNLLnbS/9UzlyrxV+F1VN68gyEvqjDo2d7SFSJDeKILuc6ckQOAp57q2WnVOT085uXRxLw8mpiXRw9z8mjqzLy0NCNntDc7qFQqlJaWQqe7+50inU6HsrIyqFQqA0dGRERE1DWMtpCztbWFq6sr0tLSAABpaWlwdXVt9ffjiIiIiIydUd+1umzZMsTExGD9+vXo1asX4uLiDB0SERERUZcx6kKuf//+2Lp1q6HDICIiIjIIo720SkRERPSkYyFHREREZKRYyBEREREZKRZyREREREaKhRwRERGRkTLqu1bbqyseqWHIx3ZQ85iXRxPz8mhiXh49zMmjqbPy0tJ2jfYRXURERERPOl5aJSIiIjJSLOSIiIiIjBQLOSIiIiIjxUKOiIiIyEixkCMiIiIyUizkiIiIiIwUCzkiIiIiI8VCjoiIiMhIsZAjIiIiMlIs5DrBxYsXMW3aNPj7+2PatGm4dOmSoUN6bFRWVmLWrFnw9/dHUFAQ5syZg4qKCgDAyZMnERwcDH9/f4SGhqK8vFzq1xlt9GDr1q2Di4sLzp07B4B5MbS6ujosXboUY8eORVBQEN5//30ALY9TndFGTR08eBATJ07EhAkTEBwcjH379gFgXrpaXFwcfH19m4xZQNfnoV05EtThQkJCRGpqqhBCiNTUVBESEmLgiB4flZWV4s8//5Ref/zxx2LhwoVCp9OJ0aNHi+PHjwshhEhMTBQxMTFCCNEpbfRg+fn5IiwsTIwaNUqcPXuWeXkExMbGig8//FDo9XohhBDXrl0TQrQ8TnVGG/1Dr9cLd3d3cfbsWSGEEKdPnxZubm5Cp9MxL13s+PHj4urVq9KYdU9X56E9OWIh18GuX78uNBqNaGxsFEII0djYKDQajSgvLzdwZI+nvXv3irfeekvk5uaK8ePHS8vLy8uFm5ubEEJ0Shv9V11dnXjttddEUVGRNCgyL4ZVU1MjNBqNqKmpabK8pXGqM9qoKb1eLzw9PUVWVpYQQohjx46JsWPHMi8GdH8h19V5aG+OTNs+EUkt0Wq1sLe3h1wuBwDI5XLY2dlBq9XCxsbGwNE9XvR6Pb7//nv4+vpCq9WiT58+UpuNjQ30ej1u3LjRKW1WVlZdc5BGZM2aNQgODoaTk5O0jHkxrKKiIlhZWWHdunXIzMxE9+7d8c4778Dc3LzZcUoI0eFtHPuaMjExwWeffYaIiAhYWFjg1q1b2LBhQ4ufH8xL1+nqPLQ3R/yOHBmt2NhYWFhYYMaMGYYO5YmXk5OD/Px8vP7664YOhe6j0+lQVFSEgQMHYvv27YiOjsbcuXNRW1tr6NCeaI2Njfjiiy+wfv16HDx4EJ9//jnmzZvHvNBD4YxcB1OpVCgtLYVOp4NcLodOp0NZWRlUKpWhQ3usxMXF4fLly0hKSoJMJoNKpcLVq1el9oqKCshkMlhZWXVKGzV1/PhxFBYWws/PDwBQUlKCsLAwhISEMC8GpFKpYGpqisDAQADA4MGDYW1tDXNz82bHKSFEh7dRU6dPn0ZZWRk0Gg0AQKPRoFu3blAqlczLI6Clz/HOyEN7c8QZuQ5ma2sLV1dXpKWlAQDS0tLg6urKKewOFB8fj/z8fCQmJkKhUAAABg0ahDt37iArKwsA8MMPP2DcuHGd1kZNzZ49G3/88QcOHDiAAwcOwMHBAcnJyXj77beZFwOysbGBl5cXjhw5AuDunXHl5eXo169fs+NUS2PYw7ZRUw4ODigpKcGFCxcAAIWFhSgvL4ezszPz8gjojHPdmTkyEUKIjj4JT7rCwkLExMTg5s2b6NWrF+Li4vDss88aOqzHwvnz5xEYGIh+/frB3NwcAODk5ITExERkZ2dj6dKlqKurg6OjI1auXInevXsDQKe0UfN8fX2RlJSEAQMGMC8GVlRUhEWLFuHGjRswNTXFvHnzMGLEiBbHqc5oo6Z27tyJL7/8EiYmJgCAqKgojB49mnnpYh988AH27duH69evw9raGlZWVti9e3eX56E9OWIhR0RERGSkeGmViIiIyEixkCMiIiIyUizkiIiIiIwUCzkiIiIiI8VCjoiIiMhIsZAjIqMSExOD1atXG2TfQggsXLgQHh4emDJlikFi6ExLlixBYmKiocMgojbgkx2IqF18fX1x+/ZtpKenw8LCAgCwdetW7Ny5E99++62Bo+tYJ06cwJEjR3Do0CHpWO+3fft2vPfee9LfOLS2toaXlxdmz56NZ555plX7iImJgb29PebPn9+hsbfGihUrWr2uIeMkon9wRo6I2k2v1+Obb74xdBhtptPp2rR+cXExHB0dH1jE3ePm5oacnBxkZWVh8+bNUCqVePXVV3Hu3Ln2hktE9B8s5Iio3cLCwrBx40bcvHnzP21XrlyBi4sLGhsbpWUhISHYunUrgLuzWNOnT8dHH30Ed3d3+Pn5ITs7G9u3b8eIESMwZMgQ/Pzzz022WVlZiZkzZ0KtVmPGjBkoLi6W2goLCzFz5kx4enrC398fe/bskdpiYmKwdOlSzJo1C25ubsjMzPxPvKWlpQgPD4enpyfGjBmDn376CcDdWcbFixfj5MmTUKvVWLt2bYvnRC6Xo2/fvli2bBk8PT2xbt06qS0qKgrDhg2DRqPBG2+8gfPnzwMAfvzxR+zatQvJyclQq9UIDw8HAGzYsAGjR4+GWq1GQEAA9u/f3+x+ExISEBUVhXnz5kGtVmPSpEk4c+ZMk/MTEhICd3d3jB8/Hunp6U3Oz73L1pmZmRg+fDg2btyIIUOGwNvbG9u2bfufcfr4+ECtVsPf3x8ZGRktniMiaj8WckTUboMGDYKnpyeSk5Mfqn9eXh5cXFyQmZmJwMBALFiwAH/99Rf279+PlStXYsWKFbh165a0/q5duxAREYHMzEw8//zziI6OBgDU1tYiNDQUgYGBOHr0KFavXo3ly5ejoKBA6puWlobw8HBkZ2dLDy2/34IFC+Dg4IDff/8da9euRXx8PDIyMjB16lQsX75cmnGLiopq9fGNGTNGejYsAAwfPhy//PILMjIyMHDgQCn+adOmISgoCGFhYcjJyUFSUhIA4Omnn8aWLVtw4sQJzJkzB++++y7Kysqa3V96ejrGjRuHY8eOITAwEBEREWhoaEBDQwPCw8MxbNgwHD16FIsX
L0Z0dLT0zM9/u379Oqqrq3H48GF8+OGHWLFiBaqqqh4Y54ULF7BlyxakpKQgJycHycnJcHR0bPU5IqKHw0KOiDpEVFQUvvvuO1RUVLS5r5OTEyZPngy5XI6AgABotVpERkZCoVDA29sbCoUCf//9t7T+yJEj4eHhAYVCgfnz5+PkyZPQarX47bff4OjoiMmTJ8PU1BQDBw6Ev78/9u7dK/X18/ODRqOBTCaDUqlsEodWq0V2djaio6OhVCrh6uqKqVOnYseOHQ9/YgDY2dmhqqpKej1lyhT06NEDCoUCc+fOxZkzZ1BdXd1s/1deeQX29vaQyWQICAiAs7Mz8vLyml3/hRdewLhx42BmZoaZM2eivr4eubm5yM3NRW1tLWbPng2FQoEhQ4Zg1KhR2L179wO3Y2pqisjISJiZmWHEiBGwsLDAxYsXH7iuXC5HfX09CgsL0dDQACcnJ/Tt27eVZ4iIHhZvdiCiDjFgwACMHDkSGzZsQP/+/dvU19bWVvr53o0CvXv3lpYplcomM3IODg7Sz927d4elpSXKyspQXFyMvLw8uLu7S+06nQ7BwcHSa5VK1WwcZWVlsLS0RI8ePaRlffr0QX5+fpuO599KS0thaWkpxbN69Wrs3bsXFRUVkMnu/j5dWVmJnj17PrB/amoqNm3aJF1Crq2tRWVlZbP7u//8yGQy2NvbSzN4Dg4O0j7vHV9paekDt2NlZQVT038+Jrp164ba2toHruvs7IxFixYhISEBBQUF8Pb2lm6IIKLOw0KOiDpMVFQUJk2ahNDQUGnZvRsD7ty5IxVI165da9d+SkpKpJ9v3bqFqqoq2NnZQaVSwcPDA5s2bXqo7d6bOaupqZFi1Wq17S5Gfv31V6m43LVrF9LT07Fp0yY4OTmhuroaHh4eEEIAAExMTJr0LS4uxuLFi7F582ao1WrI5XJMmDChxf3df370ej1KS0thZ2cnten1eqmY02q16NevX5uP6d9xAkBQUBCCgoJQU1ODJUuWYNWqVVi5cmWbt01ErcdLq0TUYZydnREQENDkz47Y2NjA3t4eO3bsgE6nQ0pKCoqKitq1n0OHDiErKwv19fVYs2YNBg8eDJVKhZEjR+LSpUtITU2VvhOWl5eHwsLCVm1XpVJBrVYjPj4edXV1OHPmDFJSUprM6LWWTqdDUVERYmNjcezYMURGRgK4W3gqFApYW1vj9u3biI+Pb9LP1tYWV65ckV7fvn0bJiYmsLGxAQBs27ZNujmiOadOncK+ffvQ2NiIr7/+GgqFAoMHD8aLL74Ic3NzfPXVV2hoaEBmZiYOHDiAgICANh/fv+O8cOECMjIyUF9fD4VCAaVS2WTmj4g6B99lRNShIiMj/3P5LTY2FsnJyfDy8kJBQQHUanW79hEYGIjExER4eXnh1KlT0qxPjx49kJycjD179sDHxwfe3t5YtWoV6uvrW73t+Ph4FBcXw8fHB3PmzMHcuXMxdOjQVve/d1erRqPBm2++iZqaGqSkpMDFxQUAMHHiRPTp0wc+Pj4YP3483NzcmvSfMmUKCgoK4O7ujoiICDz33HMIDQ3F9OnTMXToUJw7dw4vvfRSizH4+flhz5498PDwwI4dO5CQkAAzMzMoFAokJSXh8OHDePnll7F8+XJ88sknbb4U/qA46+vr8emnn8LLywve3t6oqKjAggUL2rxdImobE3FvPp+IiIxeQkICLl++jFWrVhk6FCLqApyRIyIiIjJSLOSIiIiIjBQvrRIREREZKc7IERERERkpFnJERERERoqFHBEREZGRYiFHREREZKRYyBEREREZKRZyREREREbq/wBPvbV1oBxeQwAAAABJRU5ErkJggg==)![download 
(1).png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAnIAAAFSCAYAAAB2ajI+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nOzdeXhU9d3+8feZyb4vZGURZE1BdqUbtXUpqKjdtDy0pdrap+rjUlu1WhUs1lrE2kp/uLd1rdYVRC2oRRERkR3CHnay73smyZzz+2PIkIQAA85kkjP367q4SOZkZr5zTjK58/luhmVZFiIiIiLS6ziC3QAREREROT0KciIiIiK9lIKciIiISC+lICciIiLSSynIiYiIiPRSCnIiIiIivZSCnIj0KLNmzWLBggXBboZXQUEB48aNw+12B/y5fvKTn/Dqq68G/HlExD7Cgt0AEQkd5513HmVlZTidTpxOJ0OGDOHyyy/nhz/8IQ6H5+/KOXPmBLmVHWVnZ7Nhw4ZgN+O0ffbZZyxYsIBt27aRmJjIsmXLOhw/fPgwd955J5s3byYrK4tZs2bx1a9+1Xv8mWee4amnnqKxsZEpU6bw+9//noiIiO5+GSJyHKrIiUi3evzxx9mwYQMffvghv/jFL3jqqae46667gt0s24qJieH73/8+t99+e5fHf/Ob3/ClL32J1atXc8stt3DTTTdRUVEBwIoVK3jyySd55pln+PDDDzl8+DDz58/vzuaLyEkoyIlIUMTHx3P++efz17/+lTfffJNdu3YBcMcdd/CXv/wFgNWrV/ONb3yDp556iq985St8/etf54MPPmD58uVMmTKFc845h8cff9z7mKZp8uSTT3LBBRcwadIkbr75ZqqqqgBP5Wn48OG8+eabfPOb32TSpEk89thj3vtu3ryZ733ve4wfP56vfvWrPPDAAx3u19raCkBxcTHXXnst55xzDhdeeCGvvPKK9zH+9re/cfPNN3P77bczbtw4LrnkErZs2XLcc7By5UqmTp3KhAkTmDNnDoHYaGf06NF85zvfoX///scc27dvH1u3buXGG28kKiqKKVOmMGzYMJYuXQrAwoUL+cEPfsDQoUNJTEzk+uuv58033/R7G0Xk9CnIiUhQjR49mszMTNauXdvl8bKyMlwuFx9//DE33XQTd999N2+99Ravv/46L774Io8++iiHDh0C4Pnnn+eDDz7ghRdeYMWKFSQmJh7TVbtu3TqWLFnCs88+y4IFC9izZw8A999/PzNnzmT9+vW8//77XHTRRV2259e//jWZmZmsWLGC+fPn8/DDD7Nq1Srv8WXLlnHJJZewdu1azjvvPO67774uH6eiooIbbriBX/3qV3z22WcMGDCA9evXH/c8LV68mIkTJx73X0FBwfFP8nHk5eXRv39/4uLivLeNGDGCvLw8AHbv3s2IESO8x4YPH05ZWRmVlZWn/FwiEhgKciISdOnp6VRXV3d5LCwsjOuuu47w8HAuvvhiKisrmTlzJnFxcQwdOpQhQ4awc+dOAF5++WVuueUWMjMziYiI4IYbbmDp0qXeahrADTfcQFRUFCNGjGDEiBHs2LHD+zwHDx6koqKC2NhYxo4de0xbCgsLWb9+PbfeeiuRkZHk5ORwxRVXsGjRIu/XTJgwgXPPPRen08nll1/uffzOPv74Y4YOHcrUqVMJDw/npz/9KX369DnuObr00ktZu3btcf9lZ2ef/ER3Ul9fT3x8fIfb4uPjqa+vB6ChoaFDyGv72rbjIhJ8muwgIkFXXFxMYmJil8eSkpJwOp0AREVFAZCamuo9HhkZ6Q0WBQUF/N///Z934gSAw+GgvLzc+3n7sBQdHU1DQwPgqcjNnz+fiy66iH79+nHDDTfwrW99q0NbSkpKSExM7BBusrOzyc3N7fLxo6KicLlctLa2EhYWdsxjZWZmej83DIOsrKwuz0GgxMbGUldX1+G2uro6YmNjAc/4uvbH2z5uOy4iwacgJyJBtXnzZoqLi5kwYcIXfqzMzEz++Mc/dvlYhw8fPuF9Bw4cyMMPP4xpmrz33nvcdNNNrF69usPXtFUO6+rqvGGusLCQjIyMU25rWloaRUVF3s8ty6KwsPC4X//WW28xe/bs4x5/5513TrkqN2TIEA4dOtTh9ezYsYNp06YBMHToUHbu3MnFF1/sPdanTx+Sk5NP6XlEJHDUtSoiQVFXV8eHH37Ir3/9ay677DKGDx/+hR/zf/7nf/jrX/9Kfn4+4BmH9sEHH/h030WLFlFRUYHD4SAhIQGgQ2UPICsri3HjxvHwww/jcrnYsWMHr732Gpdddtkpt/Xcc89l9+7dvPfee7S2tvLcc89RVlZ23K+/7LLL2LBhw3H/HS/EmaaJy+WipaUFy7JwuVw0NzcDMGjQIHJycliwYAEul4v333+fnTt3MmXKFAAuv/xyXnvtNfLy8qipqeGxxx7ju9/97im/VhEJHFXkRKRbXXvttTidThwOB0OGDOHqq69m+vTpfnnsmTNnYlkWP/vZzygpKSE1NZWLL76YCy644KT3XbFiBX/6059oamoiOzubv/zlL96u3PYefvhhZs+ezeTJk0lISODGG2/ssO6ar1JSUnjkkUe4//77ufPOO7n88ssZP378KT/OyaxZs4aZM2d6Px89ejTnnHMOzz//POB5PXfeeSdnn302WVlZzJ8/n5SUFAC+8Y1vcM011zBz5kyampqYMmUKN910k9/bKCKnz7ACMd9dRERERAJOXasiIiIivZSCnIiIiEgvpSAnIiIi0kspyImIiIj0UgpyIiIiIr2UgpyIiIhILxXS68hVVtZjmoFbfSU1NY7y8rqTf6F0K12XnknXpWfSdel5dE16pkBeF4fDIDm5663xQjrImaYV0CDX9hzS8+i69Ey6Lj2TrkvPo2vSMwXjunRbkLv++us5fPgwDoeDmJgY7rnnHnJycti3bx933HEHVVVVJCUlMXfuXAYOHAhw2sdEREREQkG3jZGbO3cub731FgsXLuRnP/sZv/vd7wCYPXs2M2bMYOnSpcyYMYNZs2Z573O6x0RERERCQbcFufj4eO/HdXV1GIZBeXk527ZtY9q0aQBMmzaNbdu2UVFRcdrHREREREJFt46Ru+uuu1i5ciWWZfH0009TWFhIRkYGTqcTAKfTSXp6OoWFhViWdVrH2jZ79kVqapz/X2QnaWnxJ/8i6Xa6Lj2TrkvPpOvS8+ia9EzBuC7dGuTuv/9+ABYuXMiDDz7IzTff3J1Pf4zy8rqADkxMS4untLQ2YI8vp0fXpWfSdemZdF16Hl2TnimQ18XhMI5bfArKOnLf+c53WL16NZmZmRQXF+N2uwFwu92UlJSQlZVFVlbWaR0TERERCRXdEuTq6+spLCz0fr5s2TISExNJTU0lJyeHt99+G4C3336bnJwcUlJSTvuYiIiISKgwLMsK+KInZWVlXH/99TQ2NuJwOEhMTOS3v/0tI0eOZM+ePdxxxx3U1NSQkJDA
3LlzOfPMMwFO+5iv1LUamnRdeiZdl55J16Xn0TXpmYLVtdotQa6nUpALTbouPZOuS8/Um69LXWML81/bzP9e9iX6JEYHuzl+05uviZ2F1Bg5ERGRQCuubCAvv5rDpfXBbopIwCjIiYiILbX1N4Vwx5OEAAU5ERGxpbYApxwndqYgJyIitqSKnIQCBTkREbElVeQkFCjIiYiILbUtSmAqyYmNKciJiIgttQU4BTmxMwU5ERGxJXWtSihQkBMREVvSZAcJBQpyIiJiS6rISShQkBMREVvSZAcJBQpyIiJiS6rISShQkBMREVvSGDkJBQpyIiJiS6rISShQkBMREVvSGDkJBQpyIiJiS6rISShQkBMREVvSzg4SChTkRETElo5OdghuO0QCSUFORERs6WjXqpKc2JeCnIiI2JIqchIKFORERMSWTFXkJAQoyImIiC1ZWn5EQoCCnIiI2JKWH5FQoCAnIiK2pAWBJRQoyImIiC2pIiehQEFORERs6eisVSU5sS8FORERsaWjOzsEuSEiAaQgJyIitqSKnIQCBTkREbEljZGTUKAgJyIitqSKnIQCBTkREbElVeQkFCjIiYiILR2d7KAkJ/alICciIrakBYElFIR1x5NUVlZy++23c/DgQSIiIjjjjDOYM2cOKSkpDB8+nGHDhuFweDLlgw8+yPDhwwFYtmwZDz74IG63m5EjR/LAAw8QHR190mMiIiLqWpVQ0C0VOcMwuOaaa1i6dCmLFy+mf//+PPTQQ97jL7/8MosWLWLRokXeEFdfX88999zD448/zvvvv09sbCx///vfT3pMREQENNlBQkO3BLmkpCQmTZrk/Xzs2LEUFBSc8D4ff/wxo0aNYuDAgQBMnz6d//znPyc9JiIiAqrISWjolq7V9kzT5KWXXuK8887z3vaTn/wEt9vNN77xDW688UYiIiIoLCwkOzvb+zXZ2dkUFhYCnPDYqUhNjfsCr8Q3aWnxAX8OOXW6Lj2TrkvP1FuvS3R0BACRUeG99jUcj91ej10E47p0e5C77777iImJ4cc//jEAH330EVlZWdTV1XHbbbexYMECbrnllm5pS3l5HWYA925JS4untLQ2YI8vp0fXpWfSdemZevN1qat3AdDQ0NxrX0NXevM1sbNAXheHwzhu8albZ63OnTuXAwcO8Ne//tU7uSErKwuAuLg4rrjiCtavX++9vX33a0FBgfdrT3RMREQENEZOQkO3BbmHH36Y3NxcFixYQESEp9xdXV1NU1MTAK2trSxdupScnBwAJk+ezJYtW9i/fz/gmRBx0UUXnfSYiIgIaIychIZu6VrdvXs3TzzxBAMHDmT69OkA9OvXj2uuuYZZs2ZhGAatra2MGzeOm2++GfBU6ObMmcMvf/lLTNMkJyeHu+6666THREREQBU5CQ3dEuSGDh3Kzp07uzy2ePHi497vggsu4IILLjjlYyIiItrZQUKBdnYQERFbMtW1KiFAQU5ERGzJ0hZdEgIU5ERExJY02UFCgYKciIjYkiY7SChQkBMREVtSRU5CgYKciIjYkqkxchICFORERMSWVJGTUKAgJyIitqRZqxIKFORERMSWtI6chAIFORERsaWjXatKcmJfCnIiImJLWn5EQoGCnIiI2NLRvVaD3BCRAFKQExERW1JFTkKBgpyIiNiSlh+RUKAgJyIitqSKnIQCBTkREbEl7xi5ILdDJJAU5ERExJa8FTnNdhAbU5ATERFb0qxVCQUKciIiYksaIyehQEFORERsSbNWJRQoyImIiC1piy4JBQpyIiJiS21j4zRGTuxMQU5ERGzJW5FDSU7sS0FORERs6ehkh+C2QySQFORERMSWNEZOQoGCnIiI2JJ3jJwGyYmNKciJiIgtafkRCQUKciIiYktHZ60qyYl9KciJiIgtqSInoUBBTkREbMk7a1XLj4iNKciJiIgtqSInoUBBTkREbElj5CQUKMiJiIgttXWpKseJnXVLkKusrOQXv/gFU6ZM4dJLL+WGG26goqICgI0bN3LZZZcxZcoUfvazn1FeXu693+keExERObqzg5Kc2Fe3BDnDMLjmmmtYunQpixcvpn///jz00EOYpsltt93GrFmzWLp0KRMnTuShhx4COO1jIiIicHQhYC0ILHbWLUEuKSmJSZMmeT8fO3YsBQUF5ObmEhkZycSJEwGYPn06S5YsATjtYyIiIqDJDhIawrr7CU3T5KWXXuK8886jsLCQ7Oxs77GUlBRM06Sqquq0jyUlJfncltTUOP+8qBNIS4sP+HPIqdN16Zl0XXqm3npdHI4jtQqj976G47Hb67GLYFyXbg9y9913HzExMfz4xz/m/fff7+6n76C8vC6gJfe0tHhKS2sD9vhyenRdeiZdl56pN1+XllY3AG7T6rWvoSu9+ZrYWSCvi8NhHLf41K1Bbu7cuRw4cIDHH38ch8NBVlYWBQUF3uMVFRU4HA6SkpJO+5iIiAhosoOEhm5bfuThhx8mNzeXBQsWEBERAcCoUaNoampi7dq1ALz88stMnTr1Cx0TEREBLT8ioaFbKnK7d+/miSeeYODAgUyfPh2Afv36sWDBAh588EFmz56Ny+Wib9++zJs3D/CMbTidYyIiIqCKnISGbglyQ4cOZefOnV0eGz9+PIsXL/brMREREVOzViUEaGcHERGxJUtbdEkIUJATERFbaluVwLLUvSr2pSAnIiK21D68KcaJXfkc5P7whz90efv999/vt8aIiIj4S/sinCpyYlc+B7k33nijy9vfeustvzVGRETEX6x2dTjlOLGrk85afe211wBwu93ej9scOnRIi/CKiEiPZKoiJyHgpEFu0aJFALS0tHg/BjAMgz59+jB37tzAtU5ExOZy95YTHRnG4L6JwW6K7bQPbwHcjVEkqE4a5J5//nkA/vKXv3DLLbcEvEEiIqHktY/2kJoYxY3fHx3sptiOZYFhaNaq2JvPCwK3hbjy8nIaGho6HOvfv79/WyUiEiLcpoVb5aKAsCwLp8Og1W1pjJzYls9BbsWKFfzud7+jtLS0w+2GYbB9+3a/N0xEJBQoyAWOZYHTaYDbUkVObMvnIPf73/+e66+/nu9+97tERUUFsk0iIiHDtCzvwrXiX6ZpERHuWZxBp1jsyucgV1NTw/Tp0zEMI5DtEREJKaapalEgWJZn8RGnwwG4tU2X2JbP68h9//vf5/XXXw9kW0REQo5pqWs1ENrOqNPhKT4ox4ld+VyR27RpE88//zxPPfUUffr06XDsxRdf9HvDRERCgWlaqhYFQFuV0+ENcjrHYk8+B7krrriCK664IpBtEREJOaZpYZrBboX9tOU2VeTE7nwOct/97ncD2Q4RkZBkWmiyQwCoIiehwucxcpZl8corrzBz5kwuvfRSANasWcO7774bsMaJiNidulYDw+xUkdM5FrvyOcg98sgjvPbaa/zwhz+ksLAQgMzMTJ5++umANU5ExO7cWn4kII6tyAWzNSKB43OQe/PNN3n88ce55JJLvEuQ9OvXj0OHDgWscSIidmepIhcQ3jFyhipyYm8+Bzm3201
sbCyAN8jV19cTExMTmJaJiIQAt6mKXCCYqshJiPA5yJ177rk88MADNDc3A56y9SOPPMK3vvWtgDVORMTutI5cYHgrck5NdhB78znI3XnnnZSWljJhwgRqa2sZN24cBQUF3HrrrYFsn4iIbVmWZzN3hQz/a6vIeXZ20BZdYl8+Lz8SFxfHggULKCsro6CggKysLNLS0gLZNhERW2sLGwoZ/nfsOnI6yWJPPlfk2kRFRZGRkYFpmhQXF1NcXByIdomI2F7bQsDqWvU/zVqVUOFzRe7TTz/lnnvuoaCgoMNfNoZhsH379oA0TkTEztomOWiyg/+pIiehwucgd9ddd3H99ddz8cUXExUVFcg2iYiEhLauVYUM//NW5AxV5MTefA5yLpeL733vezidzkC2R0QkZLQFOXWt+t/RyQ5aR07szecxcldddRVPP/20/nIUEfGTtgCnkOF/bafUoSAnNudzRe7b3/42P//5z3niiSdITk7ucOy///2v3xsmImJ3lneMXJAbYkOdK3LKcWJXPge5m266iYkTJzJ16lSNkRMR8YO2HlVNdvA/TXaQUOFzkDt8+DALFy7E4TjlFUtERKQL7iOlONOysCzLu/2hfHFtwe3ozg7BbI1I4Picys4//3w+++yzQLZFRCSktC/EKWj4l+kdI+f5NaeKnNiVzxW55uZmrrvuOiZOnEhqamqHYw8++KDfGyYiYnftu1RNy8KBKnL+4q3IGW2THYLZGpHA8TnIDR06lKFDh572E82dO5elS5eSn5/P4sWLGTZsGADnnXceERERREZGAnDrrbcyefJkADZu3MisWbNwuVz07duXefPmeUPkiY6JiPQGHYKcaYFWd/KbzrNWVZETu/I5yN1www1f6InOP/98Zs6cyY9+9KNjjs2fP98b7NqYpsltt93GAw88wMSJE3n00Ud56KGHeOCBB054TESkt2i/JIbWkvMvS7NWJUSc0syFlStX8rvf/Y5rr70WgC1btrBq1Sqf7jtx4kSysrJ8fq7c3FwiIyOZOHEiANOnT2fJkiUnPSYi0lu0r8ipYuRfqshJqPA5yD3//PPce++9DBw4kDVr1gAQFRXFI4888oUbceutt3LppZdy7733UlNTA0BhYSHZ2dner0lJScE0Taqqqk54TESkt2hfkVNBzr/azq0WBBa787lr9dlnn+WZZ56hX79+PPXUUwCceeaZ7Nu37ws14MUXXyQrK4vm5mbuv/9+5syZw0MPPfSFHtNXqalxAX+OtLT4gD+HnDpdl54p1K5LRUOL9+Ok5BiS43vmGp298bq0nduEOM/46/iE6F75Oo7HTq/FToJxXXwOcvX19d6u0ba1jlpbWwkPD/9CDWh7zIiICGbMmMF1113nvb2goMD7dRUVFTgcDpKSkk547FSUl9cFdCHOtLR4SktrA/b4cnp0XXqmULwuFRUN3o9LS+tobWo5wVcHR2+9LhWVnnPbdOScVlU19MrX0ZXeek3sLpDXxeEwjlt88rlr9eyzz+bJJ5/scNtzzz3HpEmTTrthDQ0N1NZ6XrRlWbz77rvk5OQAMGrUKJqamli7di0AL7/8MlOnTj3pMRGR3qJ9d5/GcPmXFgSWUOFzRe7uu+/m2muv5dVXX6W+vp4pU6YQGxvLE0884dP9//CHP/Dee+9RVlbG1VdfTVJSEo8//jg33ngjbrcb0zQZPHgws2fPBjyLOD744IPMnj27wxIjJzsmItJbtJ+pqlmr/uXdosvQZAexN5+DXHp6Oq+//jqbN2+moKCArKwsRo8e7fOWXXfffTd33333MbcvXLjwuPcZP348ixcvPuVjIiK9QcfJDgoa/mR1muyg0yt25XOQe+aZZ5g2bRpjxoxhzJgxgWyTiEhIOGZBYPGbttPp1KxVsTmfx8h9/vnnnH/++Vx11VW8/vrr1NXVBbJdIiK2pyAXOKrISajwOcg9+uijrFixgksuuYRFixbx9a9/nRtvvJH33nsvkO0TEbEtrSMXOFoQWELFKe3skJCQwBVXXMFzzz3Hu+++S319PTfffHOg2iYiYmuqyAWO2WmLLnWtil35PEauzdq1a3nnnXdYunQpSUlJ3HjjjYFol4iI7bXPbgoa/qWuVQkVPge5uXPnsmTJEgzD4KKLLuLvf/+7d803ERE5darIBU7b6Qw7srKCgrLYlc9BrrGxkXnz5nk3qhcRkS/G1DpyAaOKnIQKn4PcvffeC0BBQQHFxcVkZGR02LheREROjXZ2CByr0/IjOr9iVz4HudLSUm655RY2btxIUlISVVVVjBkzhocffpiMjIxAtlFExJbc6loNGKvTZAflOLErn2etzp49mxEjRvD555/zySef8Pnnn5OTk+PdUktERE5N+4qcW0nDr7T8iIQKnyty69at45FHHiE8PByAmJgYbr/9diZPnhywxomI2JnVoSIXxIbY0LHLjwSzNSKB43NFLjExkT179nS4be/evSQkJPi9USIioaBD16oqRn6lipyECp8rctdccw1XXXUVP/jBD8jOzqagoIA33nhDCwKLiJymDuvIqWTkV2anWas6vWJXPge5K6+8kv79+/P222+zc+dO0tPT+fOf/8xXvvKVQLZPRMS2tI5c4LRV4NrWkVNFTuzKpyDndruZMmUK7777roKbiIifdNxrVUHDn47tWg1iY0QCyKcxck6nE6fTicvlCnR7RERChipygdO5a1UVObErn7tWZ86cya9+9St++ctfkpmZiWEY3mP9+/cPSONEROzM1GSHgGk7nWHeMXI6v2JPPge5++67D4CVK1d2uN0wDLZv3+7fVomIhIAO68ipIudX2qJLQoXPQW7Hjh2BbIeISMjpuEVXEBtiQ9qiS0KFz+vItSkuLmbz5s0UFxcHoj0iIiFDW3QFjipyEip8rsgVFBRw6623snHjRhITE6murmbs2LHMmzePvn37BrKNIiK2ZLXbzUFdq/7VdjodhsbIib35XJH77W9/y8iRI1m7di2rVq1izZo1jBo1ijvuuCOQ7RMRsS3t7BA47WetGmhBYLEvnytyW7du5R//+Id3r9XY2FhuvfVWJk2aFLDGiYjYmWlZGIan289S0vCrtlxsGJ4wpzFyYlc+V+TGjh3L5s2bO9yWm5vLuHHj/N4oEZFQYFoW4U7P27BbQcOv2oKbgeENyyJ25HNFrn///vzv//4v3/zmN8nMzKSoqIjly5czbdo0HnnkEe/Xae9VERHfmKaF0+mAVlOTHfysfUXOMFSRE/vyOcg1Nzfz7W9/G4CKigoiIiK48MILcblcFBUVBayBIiJ2ZZoW4U6DRjSGy9/az1pVRU7szOcg98ADDwSyHSIiIcc0LcLCHN6PxX+8kx2OVOQ0mUTsyucgB9DY2MiBAwdoaGjocPv48eP92igRkVBgWtbR5TEU5PzqaNeqgUMVObExn4PcwoULmTNnDuHh4URFRXlvNwyDjz76KBBtExGxNdPy7DzgdKhi5G/eyQ6GZ8KDxsiJXfkc5ObNm8ff/vY3vva1rwWyPSIiIcNtWkfGcBmqyPlZ+4qcYWidPrEvn5
cfCQ8P55xzzglkW0REQop1JMg5HAoa/mZ6lx9pm7Ua3PaIBIrPQe7mm2/mT3/6ExUVFYFsj4hIyHCbnjFyToehLbr8zLSOhjgtCCx25nPX6sCBA5k/fz7/+te/vLdZloVhGGzfvj0gjRMRsbO2yQ4Ow+iw76p8cW2/n4AjXatBbpBIgPgc5G6//XYuv/xyLr744g6THXwxd+5cli5dSn5+PosXL2bYsGEA7Nu3jzvuuIOqqiqSkpKYO3cuAwcO/ELHRER6C9Nq61rVZAd/syxPgAM8QVnnV2zK567Vqqoqbr75ZoYNG8aAAQM6/DuZ888/nxdffJG+fft2uH327NnMmDGDpUuXMmPGDGbNmvWFj4mI9BamaeFweIKGulb9yzoSkgEtCCy25nOQ+973vseiRYtO60kmTpxIVlZWh9vKy8vZtm0b06ZNA2DatGls27aNioqK0z4mItKbmEfGyKki53/tK3JafkTszJB0tRsAACAASURBVOeu1c2bN/PCCy/w2GOP0adPnw7HXnzxxVN+4sLCQjIyMnA6nQA4nU7S09MpLCzEsqzTOpaSknLK7RARCZa2deQ8Y+QUNPzJ1Bg5CRE+B7krr7ySK6+8MpBt6XapqXEBf460tPiAP4ecOl2XninUrovT6SAi3EF4uIPwiLAe+/p7artOJCo6HKfDIC0tnvAwJ5GRPff8ng47vRY7CcZ1OWmQW7VqFQCZmZl+feKsrCyKi4txu904nU7cbjclJSVkZWVhWdZpHTtV5eV1AV2EMy0tntLS2oA9vpweXZeeKRSvi6u5FacRhmVaNDQ298jX31uvS0N9M1hQWlqLaZo99vyejt56TewukNfF4TCOW3w6aZC76667TnjcMAz++9//nnKjUlNTycnJ4e233+byyy/n7bffJicnx9s9errHRER6C7fZftZqsFtjL56uVc/HnnXkgtsekUA5aZBbtmzZF36SP/zhD7z33nuUlZVx9dVXk5SUxDvvvMO9997LHXfcwaOPPkpCQgJz58713ud0j4mI9BZW+8kOSnJ+5Zns0DZGTpMdxL58HiP3Rdx9993cfffdx9w+ePBgXn311S7vc7rHRER6i7Z15Jzaa9XvLMvC0TZrVcuPiI35vPyIiIj4l2eLLjC0/Ijfme0rcuj8in0pyImIBIlpecZvOVSR87v2CwI7VJETG1OQExEJEtM0PV2rqsj5XYcFgTVGTmxMQU5EJEhM07M9l8NAFTk/sywLg3ZbdAW5PSKBoiAnIhIkbZMdNGvV/8xOFTlVPMWuFORERIKk416rwW6NvVh4zi0cGSOnEyw2pSAnIhIk3oqcYeBW0PAr0zy6ILChoCw2piAnIhIkpmnh9FbklDT8qf2CwA7QZAexLQU5EZEgMS0Lw+GZ8KCuP/+y2m3R5Zm1Gtz2iASKgpyISJCYJjiPTHZwK2n4lWXhHSPn2dlB51fsSUFORCRI3NprNWBMy+qw16oZ5PaIBIqCnIhIkFjeyQ5oML6feSpyno8dqsiJjSnIiYgESYflR0zVjPzJ6lSRU44Tu1KQExEJAtOysPDsteo0DJTj/MtCW3RJaFCQExEJgrYxcQ6jbZ0zBQ1/8qwjd3Syg4Ky2JWCnIhIEHiD3JEFgTXZwb8sy2o3Rk4VObEvBTkRkSBoq8A5HAZOVeT8zmy3ILChySRiYwpyIiJB0NbV5zRUkQuE9gsCOwwDz4hEEftRkBMRCYK2CpzhMDAcqCLnZ1anipxOr9iVgpyISBAcnexwpGtVg/H9qv0YOc1aFTtTkBMRCQL3kSDnPDLZwa2uVb8y0Rg5CQ0KciIiQWC1m+zgcKhi5G+atSqhQkFORCQI2rpWDQNNdgiAzmPkdH7FrhTkRESCwG2161p1GFhowoM/dVwQWFt0iX0pyImIBEH7yQ6OI32Aqhr5j6ci5/lYy4+InSnIiYgEQVtm8+zs4PlY47j8xzNGTsuPiP0pyImIBEFXFTnNXPUfs11FzjC0c4bYl4KciEgQtN9r1Wm0da0Gs0X2YnF0jJxDFTmxMQU5EZEgaL/XqtE2Rk5pw2+sThU5dVuLXSnIiYgEQYeuVUOTHfxNY+QkVCjIiYgEgdlu+RGnKnJ+13mMnCpyYlcKciIiQXC0IoeWHwmAzhU5hWSxKwU5EZEgaD/ZQV2r/udZENjzscMwtNeq2JaCnIhIELg77LXquU1VI/9pv0WX9loVOwsLdgMAzjvvPCIiIoiMjATg1ltvZfLkyWzcuJFZs2bhcrno27cv8+bNIzU1FeCEx0REerq2pUbaT3bQOnL+41l+xPOxJjuInfWYitz8+fNZtGgRixYtYvLkyZimyW233casWbNYunQpEydO5KGHHgI44TERkd7A7FCRa5vsEMwW2Uv7ipwmO4id9Zgg11lubi6RkZFMnDgRgOnTp7NkyZKTHhMR6Q26Wn7EUpLzG9OyvFufqSIndtYjulbB051qWRYTJkzg17/+NYWFhWRnZ3uPp6SkYJomVVVVJzyWlJTk83Ompsb59TV0JS0tPuDPIadO16VnCqXrEldYC0BqaiwteBJHQmJ0jzwHPbFNJ2MYBtHREaSlxRMXG4llWb3ydRyPnV6LnQTjuvSIIPfiiy+SlZVFc3Mz999/P3PmzOHCCy8M+POWl9cFdJZYWlo8paW1AXt8OT26Lj1TqF2XqqoGAKqrG6mrawKgvKKehEhnMJt1jN56Xdxuk2ZXK6WltTQ2NmNa9MrX0ZXeek3sLpDXxeEwjlt86hFdq1lZWQBEREQwY8YM1q9fT1ZWFgUFBd6vqaiowOFwkJSUdMJjIiK9QYd15AwtCOxvnbfo8tym8yv2E/Qg19DQQG2tJ8FalsW7775LTk4Oo0aNoqmpibVr1wLw8ssvM3XqVIATHhMR6Q3aT3ZwakFgv7Msq91kB89tCspiR0HvWi0vL+fGG2/E7XZjmiaDBw9m9uzZOBwOHnzwQWbPnt1hiRHghMdERHqDtqVGnIbhHZSvIOc/ptVxQWDQhAexp6AHuf79+7Nw4cIuj40fP57Fixef8jERkZ6uLVRo+ZHAMC06bNEF6loVewp616qISChqq74ZhqG9VgPA6qIip9MrdqQgJyISBN6u1fZ7rapi5DedFwT23KbzK/ajICciEgRd7eygLbr8p31F7mjXavDaIxIoCnIiIkGgnR0Cy+owRk4VObEvBTkRkSA4WpGj3WQHBQ1/MbtcfiSIDRIJEAU5EZEg6FiR89ymrlX/8VTkPB87VJETG1OQExEJAm+Q67D8iIKGP7QFNlXkJBQoyImIBIHZxTpylhnEBtmI6Q1yns9VkRM7U5ATEQkCd7uuVaehWav+1JbXOlfklOPEjhTkRESCwLIs7x6r6lr1r7bKm8O7/IgqcmJfCnIiIkFgmtYxC9YqyPmHeZyKnM6v2JGCnIhIELjNoxU5p7bo8ivruGPkgtUikcBRkBMRCQLTsnAceQfWXqv+5R0jR+cxcjq/Yj8KciIiQWCZRytFbWO5FOT8w2q3/Rm0HyMXtCaJBIyCnIhIELgtyxs0jk52CGaLAseyrG6th
h0dI9fxf42REztSkBMRCQLTtNpV5Ow92eHxRVt5+u3t3fZ8R2etdj6/3dYEkW4TFuwGiIiEItM8tiJn13Xk9hXW4HR2X92gc0VOCwKLnSnIiYgEgWm1q8h5d3awX9AwLYvKWheG0fE1B9LxtuhSjhM7UteqiEgQdJi1auOu1dqGFtymRavborquuVue0zpmjJwqcmJfCnIiIkHg6Vo9+hbsdBi27FqtqGnyflxW3dgtz9l5jJwqcmJnCnIiXfhsWxFvfrw32M2QXqK5xc09f1/Nlr3lPt/HM9nh6OeGYfTaitzL/93Nup2lXR6rqHF5Py6raurya/yt7Ty2nV7tnCF2piAn0oWVmwt5b+0hdcWITwrLG8gvrWf7/kqf7+NuN9kBwOHwrC3X27S6Td5fe4iVWwq7PF5ZG4yKnOd/o9M6ffpxFjtSkBPpQnFlI65mNzX13TOmR3q3wvJ6AEqqfA8qlgXOdgP/e2vXamlVI5YF+WV1XR6vqHUR5jRIiI2gtLp7KnJHFwT2fK4xcvJFVdc3s+OA73+odScFOZFOWt0m5UfG9RRXdk8FQXq3wvIGAEpO4fvFtCyM9hW5Xtq1Wlzhec2lVU24mt3HHK+sdZEcH0laUhRlpxB0v4jOFTktCCxf1Duf7ufP/95IS+ux3+PBpiAn0kl5dZP3F0FxRUNwGyMd7DpUxd6CmmA34xiFR75PPNUp38KC27RwtgtyvXWMXFG7n5GCI5XJ9ipqmkiJjyItMZqybqrIecfIHbOOXLc8vdjQ/uJa3Kbl/aOtJ1GQE+mkfRXuVLrKJPD++Z8dPLd0R7CbcYyiIwHG1eJ7d3z7nR3A07XaG/daLa5s8Aam/NKugpyLlIRIUhOjqKhx4TYDPxCw7TQenbWqrtXeoKGplY825tPq7lmDRU3L4lCJZ+hAV9/jwaYgJ9JJSaXnL67YqDBV5HqQhqZWiisaOFxS32UXXrCYpkVRRSN9+8QCvof/zrNWHb01yFU0MDAzgTCn45hxcqZlUVXnIjk+irSkaO/iwIHWeUFgh7drNeBPLaepus7F3H+t57klO487AzpYSqsave85+WUKctKLuU2zx/2lFAgllY1ERjg5MztRY+R6kIPFtYAnHOwr7Dndq2U1TbS6TUYPSQV8HyfnWRC40xi5Xpg0iisbyU6NIbtPzDHVipr6Ztym5a3IQfcsQeIdI3fkc1Xkeraahmb++MI6iisbCHMaPernG+BQsecPlPAwB/mlXU/qCSYFOfHZE4u28tBLG2z/ZlhS1Uh6UjQZydGUVPo+5qm7uU2zV46pOl37i2q9H+8pqA5iSzpq61YdNSgVw/gCQc7R+wbju5rdVNa6SE+JoW+fuGOqFW1ryHnGyHmCXGk3LEGiLbp6lzXbSyitauKWK8YwMCvB53GwlmWxZPXBgFfJDhTX4nQYjBqUooqcHPXB2kPc//zaXvMXuKvZzca8cnYdru6Rg839qaSykfTkaDJSYnC1uKnugUuQmJbF7H+s4ZVlecFuSrfZX1RDakIkmSkx7MnvOd+DbYOf+6fHkZoQRekpda12nrUakCYGTPGRYQiZKTH0S4ulstZFQ1OL93jbGnLJ8ZGkJERhGJ7JRIFmecfIef7vzorc8+/tZP5rm0/5fiVVjby1cl+PCPOWZXXrH7Bb91WQlhTF8AHJnJmVwIHiWp96fwrKG3jlwzze+XR/QNt3qKSOrNQYBmbGU1bdRFNza0Cf71QpyAVBXWMLb67Yy578GnYeDM66NJZl8fe3t7FmR4lPX7/tQAWtbhMDeH/tocA2LggKyuo5WFyLaVqUVh0JcsnRQM+cubonv5qCsnqWbyyg0dWz3lQCZX9RLQMzExjcN4G8/OoeUyktLG8gLjqcuOhw0pOjjxkjZx7nl6Jp0qki1/vWkWsbepCRHE32kTGC7SsW3opcQiRhTgfJ8ZGUnkLXakFZPUtWHzzlJR/M41TkAn16q+tcfLyxgI15Zd6xtr56e+V+Fq7Yd0qLSgfKQy9v5MnF27yfL1yxl399sCsgz9XqNtl+sJJRgzxDE87MTqCl1eSwD12Y63d5xtJt2lNGS2vghv0cLK6lf3o8fdPiACgo61m/ExTkAqymvtn7V2ub/3x2gCaXm4gwB59tKz7tx/4i49W2HahkZW4Rr32U51NVcFNeOVERTs4b3491O0u7ZcBydzFNi/mvbebP/95IYUUDbtMiIzmG9JQY4NTWkqusdbH084MBH0u4elsxDsPA1eJm1daigD6Xv1mWxdodJadU6WxoaqGkspEzMuMZ3DeRukbP5y+8t5N/vLM9qKGuqLye7FTP90p6UnSHrtVdh6r4zf9byVsr9x9zP9PqVJFzGFg+/Cx+vKmA+a9tPuWg0Fl1nYsX39/1hSrObX/kpCdH0zftSJBrN06ustZFeJiDuOhwADKSY9i8p4xPcwtPes1My+Kpxdt45cM8/vjC+lNag+7oOnKe/x3dVJFbvqkAt2lhAKu2+v7e7mp2s2an54/qT46zQ0Z32V9Uw/YDlXy+vZiKmiZqG5p597ODfLD2cEC6FfMOV+NqdjNqUAoAZ2YlALDPh56fDbtKiQx30uhys93HxXrf+mQf//nsgM/tq6lvpqqumQEZce2+x3vWODkFuQAqr27kvmfXMOeZNVTXeYJPZa2LD9Yd5ssjM5g4Ip21O0tPa4HBT3MLueEvH7Nh99HZPc0tbooqGthXWHPScPb+mkM4DIPSqiY27Sk74ddalsWmPWWMGpTChef0xzQtlq0/fNyvr6x1setQVZfHWt0m2w9UsnxjfrcsQ+CLdbtKKalqpLahhdc+9HRVpiVFk5oQidNhHBPEj8fzi2cr/16Wx3/XHf/8fFFu02TNjhLGD0/jjIx4PtqQH9BfUK1uk6KKhmO6fDbuLuOep1d7JyH4as2OEh5dmMtjC3MxLQtXi5tHF+ayZNX+497nwJHxcQOz4hmSnQjA0+9sY9n6fD7ZUsiWvRWn1IZTZVoWewqqu/y5KqxoIDPV8waflhxNXWMLDU2trMot4qGXN1Db0MK7nx2gqq7jHz9mp3XkfFkQuKSqkRff38XGvDJm/2MNKzYVnPZreuG9Xfx33WH+/d/dPt+n8x8oxRUNJMdHEhURRmpCFFERTl75MI/bH/uUJ9/ayt6CalLiI72VsRkXDCU9OYan397O/Nc2n/A9YFVuEQeKa/nW+L6UVDYy59m1PnfLHt3ZoWNFrrLWxZ//vdFbyfEnt2myfGMBIwelMHxAEqu2Fp3w57Kipol1O0uwLIt1u0pwNbsZmBnPup2l1LfrnvZVU3Mrry/f4/P71fEsW59PeJgDy4IVmwtZsbmQVrdJmNPBklMIQL7K3VeB02Ew4oxkAFITo0iICT/pEJ6Kmib2F9UyddIAoiKcrN/lCcItre7jnvdt+ytY+Mk+Xv1oD59s9i0wHyzxvPcMyIgnLTGaiDBHjxsnFxbsBnwR+/bt44477qCqqoqkpCTmzp3LwIEDg90sABpdrTz03FrqGltxmyb/XpbHzy7J4dklOzBNi8snn0lJZQOf5haxeU85
E4an+/zYW/dV8M93d2BaFs/+ZwdD+iaSd7iaJ97aSvOR8vJXRmZyzbQc7xtoe4Xl9WzeU860rw7k09xCPlh7mHFD07zHC8rqycuvpriygX5pcWSlxlBd18zowX1IT4pm/PA0ln5+iLPOTGVY/yTv/UzT4r/rDvPGir24mt18d/Igpk4awFsr9/PZ1iLAoMHV6u0K3HmwimumfalD95KvKmqaSIiNIMzpoLahmVc/2sPoM1OZOML38wieN/x3PztARnI0SXGRbNrj2fQ8Izkap8NBn04VlhP5eGMBOw5WkRwfyaJP9jHpSxkkxUUe83Vmpz02T9X2/ZXUNrQwKSeDusZmnl2ykz35NQzpl3jaj9kVy7LYvKecfy/Lo6iigYSYcMYM6cM3x/XF1ewJX61uk8cW5jLrqrOJjvS8nZRVN7Ipr5wxQ1Lpkxjd4THrGlv41/u7iIsOZ9ehKpZvyGfHwSrW7ihh7Y4SZk4ZzjfH9QWgqs7Fxt1lpCZGcfjIGk4DMxOIiQwjOtLJnvwaJo5I52BRLa98mMfIQck4Hcf+bWpZVoefA8uyKKpooKLGxfABSYQ5T/z37OGSOp5dsoM9BTVMGJ7G/146kvAwz3027ymntqGFLG9FzvP/Gx/vYdn6fEYMSOLK84Zw/3PreGfVAX504TDveWhwtXbc2eEkXauWZfH80p04HQa3/2QCry/fwz//s4OSqkYu/vIZvLF8LzsOVfLLy0bS70gX0PFs2F3Kul2lZKXG8Nm2Ys4dm83wAcknvM/6XaU8/fY2Jo5I56qpI3A4DIoqG7xDEAzD4EcXDmP34WqamlvZmFdGU7ObEQOOvkf0TYvjrpkTeH/NIf69LI+X/5vnPSftuZrdvL58D4OyEvjxhcO4YEI/7nt2LU8u3srtM8Z1eZ3bO96CwK8v30N9Uyvb9ldw1dQRTB6TfcLHORUbd5dRWevix98eRl1DC//8zw72FtYwOPvYn8uGphYeenkjRRUNXPmtIWzZW05aUhQ/mTKc+55dy+fbivnW+H4dX5NpUVzZQMaRnoLOx558axsb88pYv6uUe346EVezm5eX5dHoaiU+OpyvnpVFzhnJNDS1svjTfQxIj+fLIzM6/GzUNbawelsxXx2VSVl1Ex9vKsDpMBjeP4n+GXF8uD6f70w+0zsDuU2jqxVXi7vL97uTyd1XzuC+id73D8MwGJSVwN6TzFzdsNtTgDgnJ52iigbW7yrja2dV8bfXt5CaGMXVF41gQEa89+tbWt08v3Qn6UnRpCZG8dzSHfRJjPIGyK5U17n47EhltX96HA6HQVZqrIKcP82ePZsZM2Zw+eWXs2jRImbNmsVzzz0X7GYBsPjT/RwoquVXPxhNXn41b63cT3FlI/sKa/jJlOGeb6aESBJiwlm2Pp+ahhZKKxuprnfR6HJ7uiNiwhkzuA85ZyRT19jC4dI6tu6rYPmmArJSY/nxt4cx76UN/OWVTRwqqWNARjwXTOjHoZI6lnx+kKS4CPqnx7F6WzGmBXHR4WT3iWF/YS1hToPzJ/QjKsLJax/tYcOuUjBgxaZCNuZ5fkAMw9NFkRAbgQGcNdgzhuGnU0fwx+fXMf+1zdzxo/H0S49jf1ENzy7ZyYGiWs46M5WYqDDeXLGPD9YdprahhTGDU4mNDicizMHIQankl9WxcMU+XC1uMlNiaHC1kpkSQ3afWJwOg+ZWk4KyekoqG+mfHseIM5KJDHNQWefi7U8PsGVvORnJ0Xz77P68+9lBymua+GRzIRd/+QwGZyew42AVcdFh5AxMISU+EtO0qGlooay6kfTSehKjwkiMi2DnwSoOFNUyc+pwMpKimffyRsKcDpLiPW9IGcnR7D5UxUsf7CY5PpKcM5JJiI1g854yyqqbGD4giayUWPYV1vDKh3nknJHMzCnDuefvq3llWR4/mTKcqAgntY0tHCyuZdm6fDbtKWNARjxjh/QhLSmK6MgwauqbKa9pory6iaq6ZrJTYxk+IInSqkZ2HqqiT2IUowf3ITE2guUbC4iOdDJ6cApu0+KVD/N4bFEu5433BKA1O0pwGAZnj0hn5KAUMlJiKCir58MN+VTXNTOsfyIDMuKJjggjKsJJZISTqAgnURFhuE2T4grPc36yuZDDpXVkpMQw/fyh7Cus4fMdJazYXIjDMMhMjeG7k8/k0YVb+Me72xk5KIWdB6tYs70E07J49UMHl3x1IKPPTCUxLoLahhbe/nQ/9U2t3PPTifx7WR4vvL8Ly4LvTh7EobIGnlu6k5W5hbia3eSX1tMWayLCHPRJjPJ20Y0cmEJZdRM/vySHLXvKeXRhLotX7mfEgGRiosLISo2loKyelz7YxYGSOsYMTmVgZgIHS2rZfajauwVbXHQ4Z49IJyM5mriYcFwtJnWNLZRWNVJa2UhpdSOVNS5io8M5d2w2yzcW8JfGjXx5pOcX3Tur9tMvLY6vjMwEPF2M4KlqjDozhRu/N5rwMAdfH53F8o35DMiII+9wNau3F9PcYjIo8+gvGodhUNvQwqGSOgw8e5S63Sax0eE0NbeSu7eCrfsq+NGFwxjcN5HfTB/LC+/t4p1VB1i2/jBNLjfRkWH86YX1/GTKcEqrGimubOBLZ6QwekgqsVHhWJbFweI6Xnx/F33TYrnzRxOY/Y/VvPj+Ln504TCcTgcVNU1U1LiIjQojKT6SyMJaNm4vZsnnB0lNiOSTzYU0t7jplxbHoeI6vjIq0/savnZWFl87KwuA+qYWVmwq5Ix2r7HtdU45ZwCVtS7eW3OI6vpmisobaHC1cEZGPNGRYeQdrqaqrpnrvjMKw/D88pw5ZThPLt7GG8v3cslXziDmyOspqWxkZW4huw9VM3xAEuOGplHf2HrkPazjgsD1Ta1c8a3BbNtfyT//s4MPN+QzpG8ifRKjiI0OJzYqnJioMMqrmzhUWkdEmIOMlBhM07P+XVFFA/ll9STERDB2aB/iosPJL61j9+Fqdh+uIjUhijGD+9DU7OaF93fx7qoDTPpSBhHhTuJjwomPiSAuKpzH39pKaVUjw/sn8cqRXoDvfH0QAzPj6ZcWx0cbC46EJYPE2Ajqmlp4ZVkeh0rqGNIvkZt+OI6YMAMsT5VxyecH2ZhXxuTRWXyypZDHFm6loKyO2sYWMlNi2FtQw8rcIr48MoNdh6q8YxfX7Cjh7Jx06htbiIoI41BJHS2tJt8a15fSqkYWvJkLwBXfGsKZWQl8uD6fZ5fuYNyQPkRFhuF2W+wtqGbV1mKaW9187awsppzdnzCng8KKBlZvKybvcDVgERkRxllnpjB2SB/6psURExXGoeI6DhbX8b1vnNnhe+TM7AQ27Slny95yisobiIkKIyUhCiyLFrdFdKSTNduLyUqNISs1lgnD0li9rZi5L24gJSGSypom5jyzlnNy0pkwPI2oiDBWbyumuLKRX1/pmRk755k1PPjSBrJSY8g5I5nUBM/7S1iY52dgy94Kdh+uwrJg4vA073tP37RYNu8pZ1NeGWFhDtbtLKWuwfO9GiyG1VNGDJ+i8vJypkyZwurVq3E6nbjdbiZ
NmsR7771HSkqKj49RF7BZowVl9URGR5AaG05Lq5t7nv6ckqpGfnThMM6fcPQvrZc+2O2dPBDmdJAYG0FMVBitbpOKWheuZjcGeH+hhTkdfGlgMj+dOoLk+EgWf7qfNz/ey9B+ifzqijFER4ZhWRbPLtnJx0e6Xdq+QWsbm70/wF8fncXPLs6hrrGFWxes9FbyYqPCuHBifyaNzCA1IYrlGwt49aM8zsiI584fT/C2u6yqkfufX0d1fTNx0eHUN7UQHxPBjAuGcvaIdCzgzY/3sn5XKdPPH8pZZ6Yec44Wf7qfhSv24nQ4iIpwUtd4bHdCTGQYDZ0G88dGhXHu2L5s2F1KYbmna+e6y0exMreQ5Rs9rzk8zOHz4NeEmHDmXf9VwpwOHnhxPS0tJrOvPhuAFZsKeGPFXpqa3ccsQtv+uoAnFNzz04mkJUXz+vI9vLPK0w3RFogB4mPCvVWkPZ26DgwDUuIjSYiNIL+03ntNMlJiqKxtornl6OuZPDqLqy/OAWDnwUreWrnfO0ZkcN8ETNNiX2HHLs/IcCcpCZE+bzEzKCueyWOy+fpZWd6qVaOrlU+2FLLrUBUzLhjm+R5cuY83V+wDPNfr66OzODsnnSWrD3a5sOdlXxvIdyafSUlVI7//5xrGDunDNdNySEqO5ZGX1lNS2UBURBgDMuKYOCKdnQerWPTJPsYPS+Oqi0YARycQOB0OLMviwX9tYGe77vy2XRLiYsIZNSiVLXvLqWtsISkugsHZ7R9gmgAAFfJJREFUiXxpUApJsRF8urWILXvKvee6TWJsBGnJ0aQlRpOVGsO5Y7OJj4ng09xCnvnPDlrdngs6cUQ6P784h8gIJ+CpJN34yAqG9kvk5h+MJiLcc3tFTRN3PLGKVrdFVISTCcPTmHLOgA6Vs7++uonNRyrCXTGAMUP6cMP3zvJWdC3LYvGn+9m4u4zp5w8lJT6SP7+yyTt2LTYqjPomz89PXHQ4YU6DqrpmwsMc3P4/4xjcN5H1u0r5f29sOen3w1dGZvLTqcP5YN1hXvtoD4DnD5epw8lIPrZKdDKmafG31zez7UAlQ/omkhgbwYHiWhpcrQzKTGD8sDS+Pjqrw33+/vY2VuZ6xoRGRjhpbnFjWZ6fnew+sRS0C/8Av50xjuEDkskvq+eep1czenAqN/9gNK1uiyWrD7B1fyX7Cmu6fK8Icxq43VaHx0uOjyQ7NYbSqibvpBYDyE6L5axBqUwek0XWkW72pxZvPeE4uZlTh/O1UZn8+d+byDtczZ9++WX6JEXzwdpD/OuDY7u7UxMi+eqoLD7ckN/le+X54/vxo28P451V+3l9+V6S4iK4+QdjOCMznuYWN4s+2ceSzw+SkRzDzy/JIS+/mteX7z2mu3xov0Tu/PEEWt0mtz36KRgw7zrP++Pry/fw7qoDHc5JeJiDSTkZREU4+XBDfoeqcmxUGCMHpRAe5qCqzrPpfNtxZ7sK9L1Xn92herZ1XwV//vfG4567Npd85Qy+f+5gmppb+c2CT0lPjuaWK8bgcBgsWrGPz7YVeb//Ab4xJourLvK8b9Y2NPP59hLW7SzhQHHdMZPGBqTHMXpIKpO+lOld7Bs8w5qefnu79/OIcAdfG5XFj749jIz0BEpLT22oia8cDoPU1K4r7b02yOXm5vLb3/6Wd955x3vbxRdfzLx58xg5cmQQW9a1/NI6CsvqmZiT0eH2RlcreYeqyEyNJTUxqkOXW0urmw27Stmxv4LUxGj6pcUxfGAyURFHC6lut8nn24oZNyyNqMiOt7+zch9nZCZw1pA+3setbWhmf0ENg/slEhPl+Qtjx4EKKqqbSEmIYmBWQofHAaiqdWEYkNipbF5UXs+nmwsoLG8gNiqMH5w/zPtXi69cLZ5JH4ZhUF3n8nahOZ0G/dLiiI0Op6i8gW37yjFNi8gIJ+NHZBAXHX7ktReRMzDVW0HbuKsEp8PBiIGeLoTcveXUNbTgdEBCbCRpydE0NLWyr6Ca2vpmLGD0kD6MGtzHe36aW9ykduoSBM8v4827PRM9xg1PJzMlhty95RSV1zO4bxKD+yV6f3G73Saf5RZRXNFAXWMzSfGRZKbGMnZomvdr6htbqK5zHQkYngVTnUcCU0urmz2Hq0lLjiY1MZrmFjfb9pXT1OwmItzJiDOSvdevTWFZPU6H4Z2kUVRez+6DVeSX1REfE8E3x/cjNjqcqloXBWWeN64ml5tGVwuNLjeNrtYjvxDjGJid4J2hdTKmabF9fwV9kqJJT47u0FVzoLCG/NI6KmuaSIiLpG9aHIOyE7xf09DUQnRkWJdDADo/hwUdxpS119ziZvehqiOVkyb2F9YQ7nRw2TcGExsdTqvbpK6hxft90p5lWdQ1tlBb30x0ZBgx0eFEHrlGXWlytVLT0IxpWmSkxBzT9sKyevokRXu7X9vsPFCB27QYNiC5y+7cJlcr+wtrKKtuxLI84zTDwhzU1jcTEe5kUHbCMde8K3UNzWzdW86wAckkxkWy61Alm3d7ZlE2NrUyZlgak0Zmdvh5PlRcS0V1Ey1uk9REzy4M9U2tVFQ3ERXpJCk+kuT4o91pm3aVkpoURb//396dR0VV/n8Af8PggIiyGTCCYtlXwiyZhiUVV1QMwSU1PSV1xPRwQEk9dA6auZGdSMMUMbNQWzwtormgxzQ0LSUURQ1/buAS4gAKqMAAAzPP7w/y5iQQyDKOvl9/Mfe59z6fez/DM595Lpfr1LGuEBpNr/+7IP+Py9v36HR6nLhQiOsFpSi6UwkrSwvYd7RE3xcUcLRtj5K7lTidfQt3y2u/sL7S92m0szCHXi+QcuQyBr/UFZ06yB+IQVNVgzKNFnfLtSirqIajrRXcnrKBTi9QUKyBTGYGR9v20vtCCIHrhWW1M5POHet8v9To9Cgo1qBGp699z5RrcadMi7vlVehs1x4DlbVf6KuqdSgs1qDr34WMTi+Qc/323/8CpHbc0Vbr8PILCljJLVCq0eJARi40lTXQ6wUcba3g+pQNnn/GUXo6yOHM63jh2c4PjGOFJRrY2VhKY9Dt0iqUVWjR0VoOTWUN8ovK4a7oVDv7BeDs5SKYmQG9nv7ni7hOp8ftsipUanWQmZuhUwe59L7MLypHVk4RLGRm6GRjiRd6dDb4PSivqMbZK0W4cbMMt0ur4ObUER7u9tKxS33oBfYevYKn7K3xv652qKiqwc3bFTA3N0M7C3NoKmpQXlENlaeT1HfJ3UrYWMsN+tPp9Pi/q8UQQqCbcyfY2sjrHWs0ldUo1VSjukYHm/byOseK+4/jWv5dlFdU44VnOxt8JhvDE13IteaMHAA89VTHVqvO6eExL48m5uXRxLw8epiTR1Nr5qWhGTmTvWtVoVCgoKAAOl3t5S6dTofCwkIoFIr/2JKIiIjo8WCyhZyjoyM8PT2RkpICAEhJSYGnp2ej/z6OiIiIyNSZ9F2rixcvRkxMDNauXYtOnTohLi7O2CERERERtRmTLuR69OiBLVu2GDsMIiIiIqMw2UurRERERE86FnJEREREJoqFHBEREZGJYiFHRE
)
![download (4).png](image data omitted)

**Illustration of Shrinkage Estimator in $\text{Sym}_{+}^{n}$**
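The code below works throughout in the Log-Euclidean geometry of $\text{Sym}_{+}^{n}$. Two standard identities are assumed here (they are not written out in the original, but they are exactly what `LE_error` and `frechet_mean_LE` compute): the Log-Euclidean distance is $d_{LE}(A,B) = \| \log A - \log B \|_{F}$, and the Fréchet mean of $X_1,\dots,X_N$ under this metric has the closed form $\bar{X} = \exp\left(\frac{1}{N}\sum_{i=1}^{N}\log X_i\right)$.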
###Code
def logm(spd_mats):
    """
    could be two_dim,three_dim,four_dim
    spd_matrix : [n,n] , [p,n,n] , [p,N,n,n]
    Returns:
    ------------
    return logm_m
    """
    n = spd_mats.shape[-1]
    SPDmanifold = spd.SPDMatrices(n)
    if spd_mats.ndim == 2 or spd_mats.ndim == 3 :
        log_m = SPDmanifold.logm(spd_mats)
    elif spd_mats.ndim == 4:
        p,N = spd_mats.shape[0] , spd_mats.shape[1]
        log_m = (SPDmanifold.logm(spd_mats.reshape(-1,n,n))).reshape(p,N,n,n)
    else :
        print("error exception!!!")
        log_m = None
    return log_m

def expm(spd_mats):
    """
    could be two_dim,three_dim,four_dim
    spd_matrix : [n,n] , [p,n,n] , [p,N,n,n]
    Returns:
    ------------
    return exp_m
    """
    n = spd_mats.shape[-1]
    SPDmanifold = spd.SPDMatrices(n)
    if spd_mats.ndim == 2 or spd_mats.ndim == 3 :
        exp_m = SPDmanifold.expm(spd_mats)
    elif spd_mats.ndim == 4:
        p,N = spd_mats.shape[0] , spd_mats.shape[1]
        exp_m = (SPDmanifold.expm(spd_mats.reshape(-1,n,n))).reshape(p,N,n,n)
    else :
        print("error exception!!!")
        exp_m = None
    return exp_m

#isometric embedding of SPD(n) matrix in R^{n(n+1)/2}
def SPD_to_Euclidean(spd_matrix):
    """
    spd_matrix : Symmetric positive definite matrix
    Returns: numpy array of dimension (n(n+1)/2,)
    Isometric Embedding of n x n Symmetric positive definite matrix in R^{n(n+1)/2}
    """
    n = spd_matrix.shape[0]
    sym_matrix = logm(spd_matrix)
    diag = np.diag(sym_matrix)
    uppTri = np.sqrt(2)*sym_matrix[np.triu_indices_from(sym_matrix, k=1)]
    vecd = np.hstack((diag,uppTri))
    return vecd

#isometric embedding of R^{n(n+1)/2} vector in SPD(n)
def Euclidean_to_SPD(euclidean_vec):
    """
    euclidean_vec : Euclidean column vector
    Returns: [n,n]
    Isometric Embedding of R^{n(n+1)/2} into n x n symmetric positive definite matrix
    """
    q = euclidean_vec.shape[0]
    n = (int)((- 1 + np.sqrt(1+8*q))/2)
    diag = euclidean_vec[:n]
    off_diag = euclidean_vec[n:]/(np.sqrt(2))
    sym_matrix = np.diag(diag)
    i,j = np.triu_indices_from(sym_matrix, k=1)
    sym_matrix[i,j] = off_diag
    sym_matrix[j,i] = off_diag
    spd_matrix = expm(sym_matrix)
    return spd_matrix

#samples SPD matrices according to log-normal distribution with mean 'mean' and covariance 'cov'
def log_normal_sampling(mean,cov,sample_size=1):
    """
    mean : SPD matrix n x n
    cov : SPD matrix n(n+1)/2 x n(n+1)/2
    Returns: list of SPD matrices of size 'sample_size'
    TODO vectorize conversion part
    """
    mean_euclidean = SPD_to_Euclidean(mean)
    samples_spd = []
    manifold = spd.SPDMatrices(mean.shape[0])
    samples_euclidean = np.random.multivariate_normal(mean_euclidean, cov, sample_size)
    for i in range(sample_size):
        sample_spd = Euclidean_to_SPD(samples_euclidean[i])
        samples_spd.append(sample_spd)
    samples_spd = np.array(samples_spd)
    return samples_spd if (gs.all(manifold.belongs(samples_spd))) else None

# testing the embeddings between SPD and euclidean
mat = 4*np.array([[2,-1,0],[-1,2,-1],[0,-1,2]])
euclidean = SPD_to_Euclidean(mat)
spd_mat = Euclidean_to_SPD(euclidean)
print("Original matrix\n" , mat)
print("Embedding of the Matrix\n", euclidean.shape)
print("Back to SPD space\n" , spd_mat)

#sampling from log_normal distribution
sample_size = 5*2000
mean = 4*np.array([[2,-1,0],[-1,2,-1],[0,-1,2]])
q = (int)((mean.shape[0] * (mean.shape[0] + 1))/2)
cov = np.eye(q)
samples = log_normal_sampling(mean,cov,sample_size)
if samples is not None :
    print(samples.shape)
    print(samples)
else:
    print("some error has occurred!")
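# --- added sanity checks (not part of the original notebook; the tolerances are assumptions) ---
# 1) the embedding round-trips: Euclidean_to_SPD(SPD_to_Euclidean(A)) should recover A
assert np.allclose(Euclidean_to_SPD(SPD_to_Euclidean(mat)), mat)
# 2) the embedding is an isometry for the Log-Euclidean metric:
#    ||vec(A) - vec(B)||_2  ==  ||logm(A) - logm(B)||_F
A, B = mat, np.eye(3)
lhs = np.linalg.norm(SPD_to_Euclidean(A) - SPD_to_Euclidean(B))
rhs = np.linalg.norm(logm(A) - logm(B))
assert np.isclose(lhs, rhs)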
#given N SPD matrices, computes frechet mean using Log Euclidean Metric using closed form
def frechet_mean_LE(spd_matrices):
    """
    Parameters:
    ------------
    spd_matrices : array of spd_matrices
    Returns
    ------------
    mean : array
    """
    n = spd_matrices.shape[1]
    log_samples = logm(spd_matrices)
    log_mean = log_samples.mean(axis=0)
    mean = expm(log_mean)
    return mean

def batch_frechet_mean_LE(spd_mats):
    """
    spd_mats : [p,N,n,n]
    Returns
    ---------
    fms : frechet means shape [p,n,n]
    """
    p,N,n = spd_mats.shape[0],spd_mats.shape[1],spd_mats.shape[2]
    logs = logm(spd_mats)
    mean = logs.mean(axis=1) #[p,n,n]
    fms = expm(mean) #[p,n,n]
    return fms

def emp_cov(spd_matrices):
    """
    spd_matrices : array of spd_matrices
    Returns: (q,q)
    --------
    Empirical Covariance of euclidean embeddings of spd_matrices
    """
    N = spd_matrices.shape[0]
    euclidean_embeddings = []
    fm = frechet_mean_LE(spd_matrices)
    euclidean_fm = SPD_to_Euclidean(fm)
    for spd_mat in spd_matrices:
        euclidean_embedding = SPD_to_Euclidean(spd_mat)
        euclidean_embeddings.append(euclidean_embedding)
    euclidean_embeddings = np.array(euclidean_embeddings) #(N,m)
    centered_embeddings = euclidean_embeddings-euclidean_fm
    emp_cov = np.einsum('ij,ik->jk',centered_embeddings,centered_embeddings)
    normalized_cov = emp_cov/N + (1e-9)*np.eye(emp_cov.shape[-1])
    manifold = spd.SPDMatrices(emp_cov.shape[0])
    return normalized_cov if gs.all(manifold.belongs(emp_cov)) else None

fm = batch_frechet_mean_LE(samples[None])
print(fm)
fm = frechet_mean_LE(samples)
print(fm)
ec = emp_cov(samples)
if ec is not None:
    print(ec.shape)
    print(ec)
else :
    print("Not Positive Definite! Error")
###Output
(6, 6)
[[ 1.00969489e+00 -1.64540478e-02  8.56520583e-04 -5.78645694e-03  1.34100294e-02  4.96436284e-03]
 [-1.64540478e-02  1.01736144e+00  7.71105676e-03  7.72131953e-03 -2.83596934e-03  1.70607003e-02]
 [ 8.56520583e-04  7.71105676e-03  9.81065311e-01  6.91907426e-03 -2.36388233e-02 -8.74618949e-03]
 [-5.78645694e-03  7.72131953e-03  6.91907426e-03  9.95143776e-01 -1.70689166e-03  7.66627804e-03]
 [ 1.34100294e-02 -2.83596934e-03 -2.36388233e-02 -1.70689166e-03  1.01010976e+00 -1.33832841e-04]
 [ 4.96436284e-03  1.70607003e-02 -8.74618949e-03  7.66627804e-03 -1.33832841e-04  9.91253380e-01]]
###Markdown
**Generative Model for Synthetic Data**

$\Sigma_i \sim \text{Inv-Wishart}(\Psi,\nu) \,\, i = 1,\dots,p$

$M_{i}|\Sigma_i \sim \text{LN}(\boldsymbol{\mu},\lambda^{-1}\Sigma_i) , \, i=1,\dots,p$

$X_{ij}|(M_{i},\Sigma_i) \sim \text{LN}(M_{i} , \Sigma_i) , \, i=1,\dots,p \text{ and } j=1,\dots,n$

where $\boldsymbol{\mu}, M_{i}, X_{ij} \in P_{N}$, $\Psi$ is a $q \times q$ SPD scale matrix with $q = N(N+1)/2$, $\nu > q-1$, and $\lambda > 0$.
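One identity worth keeping in mind for the moment-based initialization used later (assumed here; it is the standard inverse-Wishart mean and is not stated in the original): if $\Sigma_i \sim \text{Inv-Wishart}(\Psi,\nu)$ with a $q \times q$ scale matrix, then $\mathbb{E}[\Sigma_i] = \Psi/(\nu - q - 1)$ for $\nu > q+1$. This is the relation behind the choice $\Psi_0 = \frac{\nu_0-q-1}{p(N-1)}\sum_i S_i$ made in `intialization` below.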
Error!") # return None #generate covariance matrix required sigmas = invwishart.rvs(df=nu,scale=psi,size=p) if sigmas.ndim == 2: sigmas = sigmas[None] scaled_sigmas = sigmas/L Ms = [] for i in range(p): M_i = log_normal_sampling(mu,scaled_sigmas[i],sample_size=1) Ms.append(M_i) Ms = np.vstack(Ms) X = [] for i in range(p): X_i = log_normal_sampling(Ms[i],sigmas[i],sample_size=N) X.append(X_i) X = np.array(X) #[p,N,n,n] return X,Ms def LE_error(est,true): """ est : [p,n,n] true : [p,n,n] Returns ------------- returns Average LogEculidean Error between two matrices """ diff = logm(est) - logm(true) mean_error = np.trace(np.matmul(diff,diff),axis1=1,axis2=2).mean() return mean_error def get_MLE_Error(data,true): """ data : [p,N,n,n] ----------- Returns : Computers MLE and returns the error """ n = data.shape[2] fms = batch_frechet_mean_LE(data) #[p,n,n] return LE_error(fms,true) mu = np.eye(3) psi = np.eye(6) nu = 15 L = 50 p = 100 N = 7 data,Ms = Synthetic_Data(mu,psi,nu,L,p,N) print("Means shape",Ms.shape) print("Data shape",data.shape) def sum_dist(X,mu): """ X : (p,n,n) mu : (n,n) """ p,n= X.shape[0],X.shape[1] log_X = logm(X) log_mu = logm(mu) diff = log_X - log_mu[None] sum_LE = np.trace(np.matmul(diff,diff),axis1=1,axis2=2).sum() return sum_LE def intialization(data,covs): """ data : [p,N,n,n] covs : [p,n,n] Returns: -------- x : numpy array with all parameters in single vector """ #useful_terms p,N,n = data.shape[0],data.shape[1],data.shape[2] q = (n*(n+1))/2 sum_cov = covs.sum(axis=0) #[n,n] #mu_0 is frecht mean of frechet means fms = batch_frechet_mean_LE(data) #[p,n,n] mu_0 = frechet_mean_LE(fms) #[n,n] #L_0 num_coeff = N/p _sum_dist = sum_dist(fms,mu_0) num = num_coeff * _sum_dist term1 = (N/((N-1)*(p))) * np.trace(covs, axis1=1, axis2=2).sum() term2 = _sum_dist/p denom = term1 - term2 L_0 = num/denom #nu_0 sum_inv_cov = np.linalg.inv(covs).sum(axis=0) trace_terms = np.einsum("ij,ji", sum_cov, sum_inv_cov) # tr(A,B) trace_coeff = (N-q-2)/(p**2 * q * (N-1)) num = q+1 denom = (trace_terms)*(trace_coeff) - 1 term1 = num/denom term2 = q+1 nu_0 = term1 + term2 #psi_0 coeff = (nu_0-q-1)/(p*(N-1)) psi_0 = coeff * sum_cov theta = np.hstack(( L_0 , psi_0.ravel(), nu_0 , mu_0.ravel() ) ) # print("mean" , mu_0) # print("L_0" , L_0) # print("nu_0" , nu_0) # print("psi_0", psi_0) #print("theta" , theta) return theta ###Output _____no_output_____ ###Markdown **parameters of optimization**$\lambda : $ scalar $\hspace{2.8cm}$ constraint : $\lambda >0$ $\hspace{1.9cm}$ $\lambda = x[:1]$$\Psi : \frac{n(n+1)}{2} \times \frac{n(n+1)}{2} \hspace{0.6cm} $ constraint : SPD matrix $\hspace{1cm}$ $\Psi = x[1:q]$$\nu : $ scalar $\hspace{2.8cm}$ constriant $ :\nu \geq q$ $\hspace{1.9cm}$ $\nu = x[q+1:q+2]$$\mu : n \times n \hspace{2.8cm}$ constraint : SPD matrix $\hspace{1cm}$ $\mu = x[q+2:]$**parameter vector x**number of parameters $ = \frac{8+4n^2+n^2(n+1)^2}{4}$ ###Code def theta_to_normal(theta,q,n): """ theta : np array q : scalar n : scalar Returns: ------------ L psi nu mu """ q_s = (int)(q**2) L = theta[:1] psi = theta[1:q_s+1].reshape(q,q) nu = theta[q_s+1:q_s+2] mu = theta[q_s+2:].reshape(n,n) return L,psi,nu,mu def SURE(theta,fms,covs,N): """ theta optimization parameter fms: \bar{X_i} i [p,n,n] covs: S_i [p,q,q] N: real ---------------- Returns loss scalar """ n = fms.shape[1] p,q = covs.shape[0] , covs.shape[1] q_s = (int)(q**2) L,psi,nu,mu = theta_to_normal(theta,q,n) #constants term_1 = (L+N)**-2 term_2 = (N- (L**2)/N)/(N-1) term_3 = (nu + N - q - 2)**-2 term_4 = (N-3 + 
###Code
def theta_to_normal(theta,q,n):
    """
    theta : np array
    q : scalar
    n : scalar
    Returns:
    ------------
    L psi nu mu
    """
    q_s = (int)(q**2)
    L = theta[:1]
    psi = theta[1:q_s+1].reshape(q,q)
    nu = theta[q_s+1:q_s+2]
    mu = theta[q_s+2:].reshape(n,n)
    return L,psi,nu,mu

def SURE(theta,fms,covs,N):
    """
    theta : optimization parameter
    fms : \bar{X_i} [p,n,n]
    covs : S_i [p,q,q]
    N : real
    ----------------
    Returns loss scalar
    """
    n = fms.shape[1]
    p,q = covs.shape[0] , covs.shape[1]
    q_s = (int)(q**2)
    L,psi,nu,mu = theta_to_normal(theta,q,n)
    #constants
    term_1 = (L+N)**-2
    term_2 = (N - (L**2)/N)/(N-1)
    term_3 = (nu + N - q - 2)**-2
    term_4 = (N-3 + (nu-q-1)**2)/((N+1)*(N-2))
    term_5 = ((N-1)**2 - (nu- q -1)**2)/((N+1)*(N-2)*(N-1))
    term_6 = -2 * (nu-q-1)/(N-1)
    #trace
    trace_1 = np.trace(covs, axis1=1, axis2=2) #[p,]
    trace_2 = np.trace(np.matmul(covs,covs), axis1=1, axis2=2) #[p,]
    trace_3 = trace_1 ** 2 #[p,]
    trace_4 = np.trace(np.matmul(psi,covs), axis1=1, axis2=2) #[p,]
    trace_5 = np.einsum("ij,ji", psi, psi) #[1,]
    #dist
    log_X = logm(fms)
    log_mu = logm(mu)
    sum_LE = np.sum((log_X-log_mu[None])**2 , axis=(1,2)) #[p,]
    #total loss
    loss_1 = term_1*(term_2 * trace_1 + L**2 * sum_LE) #[p,]
    loss_21 = term_4*trace_2 + term_5 * trace_3
    loss_22 = term_6*trace_4 + trace_5
    loss_2 = term_3*(loss_21+loss_22)
    loss = (loss_1+loss_2).mean()
    #print("loss" , loss)
    # print("L" , L)
    # print("nu" , nu)
    return loss

def get_bounds(q,n):
    q_s = (int)(q**2)
    n_s = (int)(n**2)
    bounds = [(1e-3,None)]
    for i in range(q_s):
        bounds.append((None,None))
    bounds.append((q,None))
    for i in range(n_s):
        bounds.append((None,None))
    return tuple(bounds)

def fit_shrinkage_estimator(data):
    """
    data : numpy array
    Returns :
    -----------
    """
    p,N,n = data.shape[0] , data.shape[1] , data.shape[2]
    fms = batch_frechet_mean_LE(data)
    covs = []
    for i in range(data.shape[0]):
        if emp_cov(data[i]) is not None:
            covs.append(emp_cov(data[i]))
        else:
            print("not working")
    covs = np.array(covs)
    q = covs.shape[1]
    theta = intialization(data,covs)
    bounds = get_bounds(q,n)
    print("opt started")
    res = optimize.minimize(SURE, theta, args=(fms,covs,N),bounds=bounds, method='L-BFGS-B')
    if res.success is True :
        print("converged......")
        print("computing shrinkage estimator....")
        theta = res.x
        L,psi,nu,mu = theta_to_normal(theta,q,n)
        log_X = logm(fms)
        log_mu = logm(mu)
        temp = ((N * log_X) + (L * log_mu[None]))/(N+L)
        M = expm(temp)
        num = psi[None] + covs
        denom = nu+N-q-2
        sigmas = num/denom
        return M,sigmas
    else :
        print("ERROR! didn't converge")
        return None,None
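# NOTE (added sketch): the ablation cell further below calls `decreasing_error()`, which is not
# defined anywhere in this notebook. The version here is a hypothetical reconstruction, under the
# assumption that the ablation regenerates synthetic data for a range of N values (matching
# `Ns = np.arange(10,100)` in that cell) and records the shrinkage estimator's Log-Euclidean error.
def decreasing_error(Ns=range(10, 100)):
    scores = []
    for N_i in Ns:
        data_i, Ms_i = Synthetic_Data(mu, psi, nu, L, p, N_i)
        M_i, _ = fit_shrinkage_estimator(data_i)
        # fall back to NaN if L-BFGS-B did not converge for this N
        scores.append(LE_error(M_i, Ms_i) if M_i is not None else np.nan)
    return scores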
def get_MLE(data):
    fms = batch_frechet_mean_LE(data)
    return fms

MLE_M = get_MLE(data)
shrink_M,shrink_sigmas = fit_shrinkage_estimator(data)
SPDmanifold = spd.SPDMatrices(shrink_M.shape[1])
print(SPDmanifold.belongs(shrink_M))
mle_le_error = LE_error(MLE_M,Ms)
jse_le_error = LE_error(shrink_M,Ms)
print("distance between true and mle" , mle_le_error)
print("distance between true and shrinkage" , jse_le_error)
print("Improvement ",mle_le_error/jse_le_error)
###Output
distance between true and mle 0.0870515302983075
distance between true and shrinkage 0.06345948045029746
Improvement 1.3717655688418018
###Markdown
**Ablation Study**

First, we show that the error decreases as $N$ increases, to check the correctness of the estimator.
###Code
#scores = decreasing_error()
# Ns = np.arange(10,100)
# plt.plot(Ns,scores)
# plt.xlabel("Varying N")
# plt.ylabel("Log Euclidean Error")
###Output
_____no_output_____
###Markdown
The following image shows the correctness of the implementation of the estimator.

![download (5).png](data:image/png;base64,
xJLkw5rkxFJP5cMhhUev11NZWYnRaESj0WA0GqmqqkKv11sdV15eTkxMDGDdAtq1axdPP/00a9asYeTIkebtoaGhlJWVodVqzedNmTKl2322qqlpwGRSLv2LA56uTlTVNlFdfRaA1jYj5dUNKMDxk3V4ujlf1nUvhU7nbb6/6CQ5sST5sCY5sXQp+VCrVV3+wu6QrraAgAAiIyPJzs4GIDs7m8jISHMxOGfatGlkZmZiMpmora0lPz+fpKQkAPbs2cOCBQt46aWXGDdunNV5H3zwAQAlJSXs3buXuLi4bvc5gp+3q8XggpOnOosOwBkZZi2EGIQcNpx6yZIlvPPOOyQlJfHOO++wdOlSAGbPns3evXsBSElJISwsjMTERO655x7mzZtHeHg4AEuXLqWlpYVFixaRkpJCSkoKhw4dAuDRRx/lzJkzJCQk8Ktf/YrnnnsOLy+vbvc5gp+nC40tHbR3GAEorfpxUIEUHiHEYKRSFOXy+pAGkSvpaisoLOfNTw+yYk4sOj933s47xKbvygCYmxrF5DFBPRnqBUmXgTXJiSXJhzXJiaV+19U2mPl5n5s2p7N1U1rZQLDWo3ObtHiEEIOQFB478/X8YfaChlZMikJpdQNjh/mjAs42SeERQgw+Unjs7FyLp66hler6ZlrbjAwL8cbT3ZkzTe29HJ0QQjieFB4783L/YfaChjZKKzsHFoQHeeHr6cJZ6WoTQgxC/e4F0v7GPHtBQyulVSpUKhgS6Im3hzOnpatNCDEISeFxAD+vznd5mlo60Ad44uKswcfTheMVMmJGCDH4SOFxAD8vVyprm2hp6+DqMD8AvD1c5BmPEGJQkmc8DuDr5UJVfTM1Z1oZGtQ5tt3H04Xm1h9fLBVCiMFCCo8D+Hm50t5hAjoHFgD4eHTO0XZWWj1CiEFGCo8DnFuJFM4vPJ3bzsgAAyHEICOFxwHOrUTq4+mC7w9/9v7hxdIzjdLiEUIMLlJ4HOBc4TnX2oHOIgQyUagQYvCRwuMA57raLAqP+RmPFB4hxOAiw6kdwNvDhVmJo4m5KsC8zdVZg4uTWp7xCCEGHSk8DnLrNWEWn1UqFT6eLvKMRwgx6EhXWy/qfIlUWjxCiMFFCk8v8vFwlolChRCDjsMKT3FxMenp6SQlJZGenk5JSYnVMUajkaVLlxIfH09CQgKZmZnmfVu2bOGuu+4iKiqKFStWWJz3zDPPmJfDTklJYcyYMXz++ecAvPzyy8TGxpr3nVtyuy/w9pQWjxBi8HHYM57Fixczc+ZMUlJSWLduHYsWLWLt2rUWx6xfv54TJ06Ql5dHfX09qampxMbGEhYWRnh4OMuXLyc3N5e2Nssf1itXrjT/+eDBgzz00EPExcWZt6WmprJw4UL7fsHL4OvpwtmmdkyKglql6u1whBDCIRzS4qmpqaGoqIjk5GQAkpOTKSoqora21uK4nJwc0tLSUKvVaLVa4uPjyc3NBWDYsGFERkbi5NR1rfzwww+ZPn06Li4uXR7XF3h7uGA0KTS1dPR2KEII4TDdtniMRiMPP/wwr7/++mX/MDcYDAQHB6PRaADQaDQEBQVhMBjQarUWx4WGhpo/6/V6KioqbL5PW1sb69ev56233rLYvmHDBrZs2YJOp+OJJ55g4sSJlxR/QIBX9wddhiEhPgA4uTqj03nb5R7n2Pv6/ZHkxJLkw5rkxFJP5aPbwqPRaDh58iQmk6lHbmhP+fn5hIaGEhkZad527733MmfOHJydndm6dStz584lJycHf39/m69bU9OAyaT0fMDGzpmpj5+sw82ObU+dzpvqaln753ySE0uSD2uSE0uXkg+1WtXlL+w2/bibN28eS5YsoaysDKPRiMlkMv9nC71eT2VlJcYfftAajUaqqqrQ6/VWx5WXl5s/GwwGQkJCbLoHwEcffcSMGTMstul0OpydO2cJuOGGG9Dr9Rw5csTma9qTr3miUHmXRwgxeNhUeP70pz+RlZVFfHw8UVFRjBs3jrFjxzJu3DibbhIQEEBkZCTZ2dkAZGdnExkZadHNBjBt2jQyMzMxmUzU1taSn59PUlKSTfeoqKjg22+/Zfr06RbbKysrzX8+cOAAZWVljBgxwqZr2pu3zNcmhBiEbBrVdm5o8pVYsmQJGRkZrFmzBh8fH/OQ6NmzZzN//nyio6NJSUmhsLCQxMREoLOlFR4eDsDOnTt56qmnaGhoQFEUNmzYwPLly82j1z755BOmTp2Kr6+vxX1ffPFF9u/fj1qtxtnZmZUrV6LT6a74+/QEL3cnVMh8bUKIwUWlKIrNDy9MJhOnTp0iMDAQtXrwvHtqt2c8wPxVBUyO0PHgtDF2uT5IX/WFSE4sST6sSU4sOfwZT0NDA8888wwxMTHcdNNNxMTEsHDhQs6elf8pV8rX00We8QghBhWbCs+yZctobm5m/fr17Nmzh/Xr19Pc3MyyZcvsHd+A5+3hLLMXCCEGFZue8RQUFJCfn4+7uzsAI0aM4IUXXiAhIcGuwQ0GPp4uHK+QlqMQYvCwqcXj6upqNctAXV1dv5gdoK+TGaqFEIONTS2eu+++m0ceeYSHH36Y0NBQysvLeeutt7jnnnvsHd+A5+PpQnOrkfYOI85Omt4ORwgh7M6mwjN37lyCgoLIzs6mqqqKoKAgHnvsMe6++257xzfg/bgEdjtaHyk8QoiB75LmapNC0/N8zLMXtKH1cevlaIQQwv66fcZzbq62S3jdR1wCP29XAL7eX4lJciyEGARsnqtt8eLFlz1Xm7i4YSHe3BAdQt6OUlZ/vJfmVlkiQQgxsNk0c8GYMZ1v1avOW6xMURRUKhUHDhywX3R9hD1nLoDOXG7ceZL/98X36AM8eOLuGIL83Hvs+vIGtjXJiSXJhzXJiaWenLnApsEFeXl55rV0RM9TqVQkXhvOEJ0nr2Xt4x/ZRfxx1qTeDksIIezCpsEFycnJ7Ny5U97bsbNxw7VMmzKUj/59jFOnmwn07blWjxBC9BU2DS4YPnw4dXV1john0Ls2MhiAHQerejkSIYSwD5u62qZPn86cOXN48MEHrRZmi42NtUtgg1WQnzsj9N5sL6ri51OG9XY4QgjR42wqPO+99x4AL7/8ssV2lUrVI2v1CEvXRQbzwRffU1nbRLDWo7fDEUKIHmVT4fniiy/sHYc4z7Vjgvjgi+/ZfqCS6Tf0jdVShRCip3T5jKe6urrLk/ft29ejwYhOWh83RoX5sl2e8wghBqAuC09SUpLF53NLUp/z4IMP2nyj4uJi0tPTSUpKIj09nZKSEqtjjEYjS5cuJT4+noSEBDIzM837tmzZwl133UVUVJR52exzXn75ZWJjY0lJSSElJYWlS5ea9zU3N/Pkk0+SkJDAtGnT2LRpk80x96brIoMpq26krLqht0MRQoge1WVX20/fLf3pyLZLmUZn8eLFzJw5k5SUFNatW8eiRYtYu3atxTHr16/nxIkT5OXlUV9fT2pqKrGxsYSFhREeHs7y5cvJzc2lrc16GYHU1FQWLlxotf3111/Hy8uLjRs3UlJSwv33309eXh6enp42x94
bJo8J4p/5h9l+oIo7dRd/EUsIIfqbLls8589UYMvni6mpqaGoqIjk5GQAkpOTKSoqslrjJycnh7S0NNRqNVqtlvj4eHJzcwEYNmwYkZGRODnZ9FjK7NNPPyU9PR2A4cOHExUVxebNmy/pGr3B19OFMUP9+aaoku8OV7P9QCVf7a/gTKOs3SOE6N8u7af4ZTIYDAQHB5tnP9BoNAQFBWEwGNBqtRbHhYaGmj/r9XoqKipsuseGDRvYsmULOp2OJ554gokTJwJQXl7OkCFDLuua53Q19YM9JV4/nFUf7OKVj/eat8VG6/njw9dd8rV0Ou+eDG1AkJxYknxYk5xY6ql8dFl4WlpauP/++82fGxsbzZ8VRaG1tbVHgrhS9957L3PmzMHZ2ZmtW7cyd+5ccnJy8Pf375Hr23uutouJHu7H4oevBcBJo+LL3eV88d1JDhypIvAS5nKTOaesSU4sST6sSU4sOWyutuXLl1t8/ul6PGlpaTYFodfrqaysxGg0otFoMBqNVFVVodfrrY4rLy8nJiYGsG4BXYxOpzP/+YYbbkCv13PkyBGuu+46QkNDKSsrM7esDAYDU6ZMsSnu3qZWqRgW8uNvGD+fMpRN35Xx+XcnSb91VC9GJoQQl6/LwnPnnXf2yE0CAgKIjIwkOzublJQUsrOziYyMtOhmA5g2bRqZmZkkJiZSX19Pfn4+7777brfXr6ysJDi4c6qZAwcOUFZWxogRI8zX/OCDD4iOjqakpIS9e/fyl7/8pUe+l6NpfdyYFKFjc6GBlBtH4ObikJ5SIYToUQ77ybVkyRIyMjJYs2YNPj4+5iHRs2fPZv78+URHR5OSkkJhYaF52Pa8efMIDw8HYOfOnTz11FM0NDSgKAobNmxg+fLlxMXF8eKLL7J//37UajXOzs6sXLnS3Ap69NFHycjIICEhAbVazXPPPYeXV/8dJZYwOZwdB6vYtq+CW68J6+1whBDiktm0Hs9g11vPeC5EURSe/7+dtLQZWTZ7CmobRhZKX7U1yYklyYc1yYmlnnzGY9MKpKLvUKlUJEwOp6K2if3Ftd2fIIQQfYwUnn5o8pggfDxd2Lij9JJe4hVCiL7Apmc8bW1tfPLJJxw4cICmpiaLfStXrrRLYOLinJ3UJEwO46N/H2P9thLukIlEhRD9iE2FJyMjg4MHDzJ16lQCAwPtHZOwwc+vH0ZFTRNZBcU4a9T8/PrOtXtqz7SwflsJfl6upNwoBUkI0ffYVHgKCgr4/PPP8fHxsXc8wkZqlYpf/iKSdqOJzC+PAtDc1kHe9lLaOkx4uTtL4RFC9Ek2FR69Xn/BiTlF71KrVTyWPJb2jh+Lz5Sxwfh4uLBxZylnm9rw9nDp5SiFEMKSTYUnNTWVuXPn8uCDDxIQEGCxT5a+7l1OGjVzUqLI/7aUMUP9GaH3Yc/RU2zcWUpFbZMUHiFEn2NT4XnnnXcAePHFFy22y9LXfYOzk5qfTxlm/hwS0LnkQ0VNE6PC/HorLCGEuCBZ+noACvRxw0mjxlDb1P3BQgjhYPIezwCkVqsI1rpTUSOFRwjR99jU4mloaODll19mx44d1NXVWby0+OWXX9orNnEF9FoPSqtk2WwhRN9jU4tnyZIlFBUVMXfuXOrr6/nTn/6EXq/n4YcftnN44nKFBHhSXd9Ch9HU26EIIYQFm1o8W7duNS+sptFoiI+PJzo6mjlz5kjx6aP0AR6YFIWqumb0Ib69HY4QQpjZ1OIxmUx4e3cuSObh4cHZs2fR6XQcP37crsGJy6cP8ADAIM95hBB9jE0tnjFjxrBjxw5iY2OZPHkyS5YswdPTk+HDh9s5PHG5gv07C09FbWMvRyKEEJZsavEsW7aMIUOGAPDss8/i5ubGmTNnZILQPszd1Ql/b1dp8Qgh+hybWjznVgGFzmWsly9ffsk3Ki4uJiMjg/r6evz8/FixYoVVi8loNLJs2TIKCgpQqVQ8/vjjpKWlAbBlyxZefPFFDh8+zAMPPMDChQvN561evZqcnBzzCqQLFiwgLi4O6JzgdNu2bfj7+wOdS2H/+te/vuT4+6MQrQcVP3mX54vvThLo60bMVTLZqxCid9hUeBRFITMzk+zsbOrq6li/fj07duygurqaX/ziFzbdaPHixcycOZOUlBTWrVvHokWLWLt2rcUx69ev58SJE+Tl5VFfX09qaiqxsbGEhYURHh7O8uXLyc3NtZo3LiYmhkceeQR3d3cOHjzIrFmz2LJlC25ubgA8/vjjzJo1y6Y4BxJ9gAdf7a80D38/3djGe/lHGDdCK4VHCNFrbOpqW7VqFR9++CHp6ekYDAYAQkJC+Mc//mHTTWpqaigqKiI5ORmA5ORkioqKqK21XEEzJyeHtLQ01Go1Wq2W+Ph4cnNzARg2bBiRkZE4OVnXyri4ONzd3QGIiIhAURTq6+ttim0gC9F60NzaQf3ZVgC+2leB0aRQ98NnIYToDTYVnk8++YTXXnuN22+/HZVKBUBYWBilpaU23cRgMBAcHIxGowFAo9EQFBRkLmLnHxcaGmr+rNfrqaiosOke52RlZTF06FBCQkLM2958802mT5/O3LlzOXr06CVdrz/T/zBn28mqBhRFoWBPOQD1DVJ4hBC9x6auNqPRiKdn5w+xc4WnsbERDw8P+0V2GbZv386qVat44403zNsWLFiATqdDrVaTlZXFY489Rn5+vrkI2iIgwMse4drduB9ahyerzuLs5IuhpomQAA8qaprw8/fA2cn2HAxEOp13b4fQp0g+rElOLPVUPmwqPDfffDMvvPACf/zjH4HOZz6rVq1i6tSpNt1Er9dTWVmJ0WhEo9FgNBqpqqpCr9dbHVdeXk5MTAxg3QLqyq5du3j66adZs2YNI0eONG8PDg42/zk1NZUXXniBiooK8yg9W9TUNGAyKd0f2MeYFAVXZw0nqxvYe6QaV2cNt0wYwvufH+H7khoCfd17O8Reo9N5U119trfD6DMkH9YkJ5YuJR9qtarLX9ht6mr7wx/+QHV1NZMmTeLs2bNMnDiR8vJyfv/739sUREBAAJGRkWRnZwOQnZ1NZGQkWq3W4rhp06aRmZmJyWSitraW/Px8kpKSur3+nj17WLBgAS+99BLjxo2z2FdZWWn+c0FBAWq12qIYDWRqVedkod+X1rP9YBXXjgkiRNvZSq1vkIX9hBC9w6YWj5eXF6tXr+bUqVOUl5ej1+vR6XSXdKMlS5aQkZHBmjVr8PHxYcWKFQDMnj2b+fPnEx0dTUpKCoWFhSQmJgIwb94881DunTt38tRTT9HQ0Pm8YsOGDSxfvpy4uDiWLl1KS0sLixYtMt9v5cqVREREsHDhQmpqalCpVHh5efHqq69ecIDCQKUP8OSbos7iGzdej6tzZ/davQwwEEL0EpVy/lTT5zGZbJtcUq0e+Csr9NeuNoB/bSkma0sxIVoPls+eQkNzO799aQv3xY8iYXJ49xcYoKQbxZLkw5rkxFJPdrVd9Ff/sWPHmgcSdOXAgQM2BSJ6R8gPc7bFxeg7W33uzmjUKmnxCCF6zUULz/lLWn/55Zd89t
ln/OpXvyI0NJTy8nL+/ve/m7vERN8VNSKA5BtHcNOEzkEaKpUKPy9XGVIthOg1Fy0854/6euutt/joo4/w8fEBYMSIEURFRTFjxgxmzpxp/yjFZfNwc+JXd8ZYNJH9vV1lcIEQotfY9IDm7NmzNDc3W2xraWnh7Fnp/+yP/LxcZPYCIUSvsWl415133skvf/lLHnroIUJCQqioqODtt9/mzjvvtHd8wg78vF3ZV1zb/YFCCGEHNhWep59+mqFDh5KTk0NVVRU6nY7777+fe+65x97xCTvw93Klpc1Ic2sH7q6DZ2i5EKJvsOmnjlqt5r777uO+++6zdzzCAfy8XIHOOduk8AghHO2iP3WysrJITU0F4MMPP7zoBe6+++6ej0rYlZ/3ucLTZp5IVAghHOWihWfDhg3mwrNu3boLHqNSqaTw9EN+Xi6AzF4ghOgdFy08f//7381/fvvttx0SjHCM87vahBDC0S5aeGTKnIHL3dUJNxeNDKkWQvSKy54yR1EUVCqVTJnTT8nsBUKI3mLTlDli4JHZC4QQvcWmKXPa2tpQqVQ4Ozubt7W3t3ORia1FP+Dn5cLh0tO9HYYQYhCy6QHNL3/5S/bv32+xbf/+/Tz66KN2CUrYn593Z1eb/PIghHA0mwrP4cOHGT9+vMW2mJgYDh48aJeghP35ebliNCmcbW7v7VCEEIOMTYXH29ubU6dOWWw7deoU7u7udglK2J//uSHVMrJNCOFgNhWexMREfve733H48GGam5s5dOgQCxcu5Oc//7nNNyouLiY9PZ2kpCTS09MpKSmxOsZoNLJ06VLi4+NJSEggMzPTvG/Lli3cddddREVFmZfNtuW8rvYNZj/OXiCFRwjhWDZN1LVgwQL+/Oc/k5aWRltbG66urtx111089dRTNt9o8eLFzJw5k5SUFNatW8eiRYtYu3atxTHr16/nxIkT5OXlUV9fT2pqKrGxsYSFhREeHs7y5cvJzc2lra3N5vO62jeYmWcvkJFtQggHs6nF4+rqyuLFi9m9ezdbt25l165dLFq0CFdXV5tuUlNTQ1FREcnJyQAkJydTVFREba3l1Pw5OTmkpaWhVqvRarXEx8eTm5sLwLBhw4iMjMTJybpWdnVeV/sGMz/pahNC9BKbWjylpaUWnxsbG81/Dg8P7/Z8g8FAcHAwGo0GAI1GQ1BQEAaDAa1Wa3FcaGio+bNer6eiosKm61/svMu95kDnpFHj7eFMnXS1CSEczKbCk5CQgEqlshh6e25Wg8Ewc0FAgFdvh3DFdDpv621+HjS1GS+4bzAYrN/7YiQf1iQnlnoqHzYVnp8Om66uruaVV15h8uTJNt1Er9dTWVmJ0WhEo9FgNBqpqqpCr9dbHVdeXk5MTAxg3Vrp6voXO+9yr3m+mpoGTKb++76LTudNdbX1MuVe7k5U1jRecN9Ad7GcDFaSD2uSE0uXkg+1WtXlL+yXNcOnTqfj2Wef5cUXX7Tp+ICAACIjI8nOzgYgOzubyMhIi242gGnTpi1Zr1MAACAASURBVJGZmYnJZKK2tpb8/HySkpK6vX5X513uNQcDPy8XGVwghHC4y15+8tixYzQ3N9t8/JIlS8jIyGDNmjX4+PiYh0TPnj2b+fPnEx0dTUpKCoWFhSQmJgIwb9488zOknTt38tRTT9HQ0ICiKGzYsIHly5cTFxfX5Xld7Rvs/LxcOdvYxpqsfdScbqa+oY27b7mK2HEhvR2aEGIAUyk2zJkyc+ZMi5mqm5ub+f7775k3bx6/+tWv7BpgXzBQu9r2HqvhtXX78fF0IdDHlfKaJnw8XVj88LW9EKVjSTeKJcmHNcmJpZ7sarOpxZOWlmbx2d3dnTFjxjB8+HCbghB9U/TIAFYvuMn8OW9HKe9/foSyU40MCZQlsYUQ9mFT4bnzzjvtHYfoA6aMDeb/ffE9X+2r4O5brurtcIQQA1SXgwt+/etfW3x+6aWXLD7PmDGj5yMSvcbX04VxI7R8XVSB6SI9sCaTQt6OUj788iin6m1/xieEEOd0WXi++eYbi8/vvPOOxedjx471fESiV/0sKoTaM60cOlFvta+huZ2/fljI+58f4dNvjrPwb1+x+pO9fF8m6/oIIWx3SaPafjoOoaulsUX/NHFUIG4uGrbtMxA5zN+8/UTlWV75eC91Z1t5ICmC8VcF8MV3Zfx7dxnfHa5m6SPXEabr/y/aCiHs75Le45FCM/C5OGuYPCaInYeqaW03oigKm3aVsfztbzGaFDLuv4apE4eg9XHj7luu4rlHp6AoUPj9qe4vLoQQdNPi6ejo4KOPPjK3dNra2vjwww/N+41Go32jE73iZ+NC2LLHwJY9BopKatl15BTjRmh5LHksvp4uFsf6e7sSHuTF/uJabo8d3jsBCyH6lS4Lz/jx48nKyjJ/jo6OZt26debP56ahEQPL6KF+BPi48u7Gw2jUKu699Wrirw1HfZEWb9QILXk7Smlu7cDd9bLfSRZCDBJd/pR4++23HRWH6EPUKhXTpgzjq/0VPJAYwbCQricGjBqh5dNvTnDwRB0TR+kcFKUQor+SX0/FBd02KYzbJtm2WN7VYX64OKvZV1wrhUcI0a3LmiRUiPM5O6kZM9Sf/cdquz9YCDHoSeERPSJ6ZABV9c1U1TX1dihCiD5OCo/oEVEjOpe42FcsrR4hRNcua+nrc1xcXNDpdKjVUr8GuyB/dwJ93dh3rJZbr7Ht2ZAQYnC6pKWvoXP2gvNfJFWr1dx6660sXryYwMBA+0Qp+jyVSkXUyAC+2l9Bh9GEk0Z+GRFCXJhNPx2ef/55kpOT+eyzz9izZw+5ubmkpKSwePFi/vWvf9HR0cFzzz1n71hFHxc1Qktrm5GjMnebEKILNi0Ed9NNN7Fx40ZcXV3N25qbm0lKSmLz5s2cPn2axMREq0lFB4qBuhBcT2tu7WD+qgK8PZzR+bnj5e7MEJ0nd9wwos+1gGSRL0uSD2uSE0sOXwjOZDJx8uRJrrrqxzVaysvLMZlMQOfCcN1Nn1NcXExGRgb19fX4+fmxYsUKq4XkjEYjy5Yto6CgAJVKxeOPP25ehK6rfc888wyHDh0yX+fQoUOsXr2a2267jZdffpl//vOfBAUFAXDNNdewePFiW762uETurk7ce9soDp6oo7G5nar6ZnYdOcXxigbm3RmFi7Pmkq7X2mYk/9tSDpXWM+eOKDzc5LUzIQYCm/4lP/TQQzz00EPMmDGDkJAQKioq+Pjjj3nwwQcB2Lx5MxMmTOjyGosXL2bmzJmkpKSwbt06Fi1axNq1ay2OWb9+PSdOnCAvL4/6+npSU1OJjY0lLCysy30rV640X+PgwYM89NBDxMXFmbelpqaycOFCm5MiLt9PXzz99+4y1uYe4q+ZhTwxI8amKXU6jCYKCsv519YSTje2AfDl7jJ+cf0wu8UthHAcm/o/Zs+ezX/+539SXV3N559/TlVVFcuXL+fxxx8HID4+nn/84x8XPb+mpoaioiKSk5MBSE5OpqioiNpay6G3OTk5pKWloVar0Wq1xMfHk5ub2+2+83344YdMn
z4dFxcXq33C8W6eMITHpo/lcOlpXvxgN00t7d2e88rHe3k77zBB/u78YdY1jBvuz8YdpbR3mBwQsRDC3mzuu7jpppu46aabLusmBoOB4OBgNJrOrhaNRkNQUBAGgwGtVmtxXGhoqPmzXq+noqKi233ntLW1sX79et566y2L7Rs2bGDLli3odDqeeOIJJk6ceFnfQ1ye2HEhuDhpeDVrH1kFxcxMGH3RY6vqm9lztIbbY4dx100jUalUTLt+GH95fzdf7a/gpvGhFz1XCNE/2FR42tvbefXVV1m3bh1VVVUEBQWRkpLCnDlz+lTLIj8/n9DQUCIjI83b7r33XubMmYOzszNbt25l7ty55OTk4O/v38WVLHX1kKy/0Om6nujT3qbpvCk6UUfBXgOPpEbj7XHhvzf5u8oBmHFbBDp/dwBuDvTik4JiNu4s5c5bR6NW98y6UL2dk75G8mFNcmKpp/JhU+H5r//6L/bs2cPSpUsJDQ2lvLycNWvW0NDQwB//+Mduz9fr9VRWVmI0GtFoNBiNRqqqqtDr9VbHlZeXm5dbOL+V09W+cz766CNmzJhhsU2n+3HSyhtuuAG9Xs+RI0e47rrrbPnqgIxq6ym3jA9l07cnydx4iOk/G261X1EUPt9+nDFD/aCjwyLmxMlhvLZuP3nbipkUceUTkfaVnPQVkg9rkhNLPTmqzaZnPLm5ubz66qvceOONjBw5khtvvJFXXnmFTz/91KYgAgICiIyMJDs7G4Ds7GwiIyMtutkApk2bRmZmJiaTidraWvLz80lKSup2H0BFRQXffvst06dPt7hmZWWl+c8HDhygrKyMESNG2BS36FnhQV5EjdDy+bcnae+wHgV5zHCGyrpmYseFWO2bFKEj0NeNT785brUEuxCif7GpxXOxf+iX8gNgyZIlZGRksGbNGnx8fFixYgXQOXBh/vz5REdHk5KSQmFhIYmJiQDMmzeP8PBwgC73AXzyySdMnToVX19fi/u++OKL7N+/H7VajbOzMytXrrRoBQnH+vmUofzX+7vZtq+CmycMsdj39b5KnJ3UTIoIsjpPo1YzbcpQ3sk7zOHSeiKG2t5VKoToW2x6gXT58uXs3buXefPmERoaSllZGa+++irjxo3jT3/6kyPi7FXS1dZzFEXhubd20tJuZPnsKeZVTTuMJp56ZSuRw/z5dWrUBc9tbTeS8dpXeHk48x8PTr7k94LO15dy0hdIPqxJTiw5vKvt6aefJjY2lueee4677rqLZcuWMWXKFJ555hnbIhbiByqVimlThlJZ28TuI6fM2/cV19LQ3E5slHU32zmuzhoevT2SsupG3v/8iCPCFULYgU2Fx8XFhd/+9rds3LiRwsJC8vLyeOKJJ3j11VftHZ8YgCaP6Xxe8/7nR9j03UmaWjr4al8FXu7O5uUVLiZqZAA/nzKUL3eXs+NglYMiFkL0pMueQMtoNPLaa6/1ZCxikNCo1fzyF5G4uTjxdt5hnnplC98drmZKZLBNc7rdedNIRob68NanB6iub3ZAxEKInnRFk1/J6CJxuSKH+bP0kWspqThLQWE5RcfruGWibS+HOmnUzLljHIvf3MF/vv0tvp4umBRQq+GhaWMYofexc/RCiCtxRYXn/HV5hLhUKpWKEXqfyyoUgX7u/ObOKDbuPPnDteDA8To+236COSkXHpwghOgbuiw8X3311UX3tbd3P+eWEPYUOVxL5PAfnwm9u/Ew/95dTkNzO17uzr0YmRCiK10WnmeffbbLk38684AQvSkuRs/n357k6/0VxE/+8R2vo+WneWPDAe67bRRRIwMszik2nGHfsRpu/9lw89BuIYR9dVl4vvjiC0fFIcQVGxrszbAQbwr2GLhtUhgqlQqTovBO3mEMNU289NEe5t4ZzYSrO5do/+5wNX/7137aO0yEBXkxcZS8WCyEI/StZSGFuEI3xegprWrgeGXni25f7avgeMVZ7osfRZjOi9Uf7+XbQ1Vs2HKM1R/vJTzIiwAfVz7bXtrLkQsxeEjhEQPKlLHBODupKSg00NLWwYf/PsoIvQ+3TQrj9/dOZLjemzWf7OO1T/Yy/upAnr5vIvGTwzlcWk+x4cxFr2syKew5WkOHUdYEEuJKSeERA4qHmzOTInR8XVRJVkExpxvamBk/CrVKhYebE0/dM4FrRutIvfkqfnNXNK7OGm4aH4q7q4bPtp+44DUVReGf+Yf5a2Yhn3593MHfSIiBRwqPGHDiYkJpbu0gb0cp148N5qohP04c6+7qxLy7onn0jijzuj7urk7cND6UnQerqTndYnW93O0n+OK7MjzdnMjbUUpza4fDvosQA5EUHjHgRAz1Q+fnhouTmrtvucqmc+IndY6C27jT8lnPN0WVZG46yrVjglhwzwQaWzr4/NuTPR6zEIPJFb1AKkRfpFapmJ08jua2DrQ+bjadE+DrxrWRQWwuLOdnUSHUnW2l7FQjWQXHGB3ux2PJkTg7aYi5KoDPtp/gtklhuLvKPx8hLof8yxED0tVhvt0f9BNJ14XzTVElS97cYd42LNib39wVjbNT5xIM028YzvK13/LFdye5PXZ4T4UrxKAihUeIHwwP8WFOyjjaO0wE+3sQpHXH293ZYmqoq0J9iRqh5bPtpdw2KQw3F/knJMSlkmc8QpznushgbojWc3WYLz4eLhecj/COG0fQ0NzOpl1lV3Sv7Qcq2brXcEXXEKI/cljhKS4uJj09naSkJNLT0ykpKbE6xmg0snTpUuLj40lISCAzM9OmfS+//DKxsbGkpKSQkpLC0qVLzfuam5t58sknSUhIYNq0aWzatMmu31MMfFcP8WXscH8+215Ke4fxks9XFIWPNx/jtXX7+b/cQzS2yLyHYnBxWD/B4sWLmTlzJikpKaxbt45Fixaxdu1ai2PWr1/PiRMnyMvLo76+ntTUVGJjYwkLC+tyH0BqaioLFy60uu/rr7+Ol5cXGzdupKSkhPvvv5+8vDw8PT0d8r3FwPSL64fx3+/vZuu+Cm6ZMMTm8zqMJt7MOchX+yuIGqll37FadhysuqRrCNHfOaTFU1NTQ1FREcnJyQAkJydTVFREbW2txXE5OTmkpaWhVqvRarXEx8eTm5vb7b6ufPrpp6SnpwMwfPhwoqKi2Lx5cw9/QzHYRA7zZ1iIN599cwKTqet1qRRFobKuic2F5fzXe7v4an8FqXEjWJA2ntBAT7btq3BQ1EL0DQ5p8RgMBoKDg9FoOkcGaTQagoKCMBgMaLVai+NCQ39cDEyv11NRUdHtPoANGzawZcsWdDodTzzxBBMnTgSgvLycIUOGXPQ8WwQEeF3S8X2RTufd2yH0OVeak3sTI1ixdiffVzZwQ8yFF7HbUljG37P2UXum88VUPy9XFtw3kVsnDwUgYcow/m9DER0qNfrA3m2Fy98Ra5ITSz2VjwExJOfee+9lzpw5ODs7s3XrVubOnUtOTg7+/v49cv2amoZuf6vty3Q6b6qrz/Z2GH1KT+RkVIg3QX7ufJB3kFEhXlYDEY6WneYv//yOITovbr9+KKOH+hMa4IFKpTLf
O3qYHyoge/P3pMaNvKJ4roT8HbEmObF0KflQq1Vd/sLukK42vV5PZWUlRmPng1ij0UhVVZXVej56vZ7y8nLzZ4PBQEhISLf7dDodzs6dC3/dcMMN6PV6jhw5AkBoaChlZWUXPE+IK6FWq0iaMpRiw1kOnai32Fd7poWXP96Lv7crv0ufwNRrwhgS6GlVnLQ+bkQO92fbvgpZSl4MGg4pPAEBAURGRpKdnQ1AdnY2kZGRFt1sANOmTSMzMxOTyURtbS35+fkkJSV1u6+ystJ8jQMHDlBWVsaIESPM533wwQcAlJSUsHfvXuLi4uz+ncXgcENUCD4ezuR8fRyjqXPm6tZ2Iy9/tJe2diPz7x7f7WqoP4sK4dTpFo6cPG3e1tjSTmv7pY+YE6I/cFhX25IlS8jIyGDNmjX4+PiwYsUKAGbPns38+fOJjo4mJSWFwsJCEhMTAZg3bx7h4Z1zaHW178UXX2T//v2o1WqcnZ1ZuXIlOl3nol6PPvooGRkZJCQkoFaree655/Dy6v/PbETf4OKsIX5yOB9vPsac//43Ab5uaNQqKmqamH93DENseG5zzWgdrs6H2bbPwBCdJzlfHyd/50mGBnuRcf81aNT2/f3wQEktb+UeYtQQH8ZfHSjLhgu7UynSvu+WPOMZeHoyJ0aTie1FVZTXNFJV10ztmRbixody0/gLDzi4kNezi9h5qBonjYqmlg7GDPPnwPE6Ztw80q5T8xwtP81/v7cbo8lEh1FBo1YxZqgf98aPtqloDmTy78ZSTz7jGRCDC4ToTRq1mtioK3tueNOEULbtq2B0eAAzbh5JeJAXa7L2sW5LMeOvDiRM1/OtdENNI6sy9+Dj6cx///ZmjhTX8O3hKgoKDaz5ZC+LH74WF2dNj99XCJkyR4g+YFSYHy8/eRML7hnP0GBvVCoVDyRF4O7qxOvZB2xa+bShuZ2j5adp77jwsUaTCZNJQVEUas+08JcPdqNWq/hd+gS0Pm6MDPUh7Zar+dUd4zDUNPHhl0d7+msKAUiLR4g+w8PN8p+jj4cLDyZFsPqTzpbP1IlDUKtVqFQqas+0YKhpxFDTRFl1IyeqzlJ7phUArY8rt8cO58ZoPU4aFQeP15H/7Ul2f3+Kcx3rKsDNVcMz911DkL+HxX3HjdBy26Qw8r89yfirAxk3wnIQkBBXSp7x2ECe8Qw8/Skn//uv/XxdVHnBfRq1iiB/d4YFexMe7IWflytffHeSo2Vn0Pq44u7qRFl1I17uzvwsKgQPNydMJgWTojBpdBDDQjpfCPxpPlrbjTz31g6aWzt47tEpqFUqvi87TXV9M9ePC8bTbeAPQOhPf0ccoSef8UjhsYEUnoGnP+WkvcPIriOnaG0zYlQUTCYFX09XQgM90Pm546Sx7DFXFIX9JbVkbztOe4eJqROHMGVskHlNoQu5UD6KDWf4z7e/xdPNibPN7ebW0tAgL3537wS8PVx6/Lv2Jf3p74gjyOACIQYRZycN10UG23y8SqUiakQAUSMCrui+I/Q+zIwfxXeHq7lqiC+jw/1obTPy2r/2s/Kfu/j9fRPx9RzYxUfYhxQeIcRFTb0mjKnXhFlse/LuGFZ9tIcV737H0/dNxN/b1WK/oigcPFFPaIAHvl6W+7pSXd/M+58fYWSoD5MiggjRenR/kuiXpKvNBtLVNvBITixdaj4Ol9bz18xCVCoVSdeFkzA5HHdXJ05UnuX9z49w8EQ9Wp/O6YL0AT++D9TeYWLnoSqiRwZYvKhqUhRW/nMXR8tOY/zh39qQQE8Cfd1obO2gqaUDdxcNv793Iq4ujhniLX9HLPVkV5tmyZIlS3oorgGrubmN/lyePT1daWpq6+0w+hTJiaVLzUeArxsTR+morm/my13lbC4s55jhDO/nH6GlzcjtscM4Wnaagj0Gxg7X4uflSrHhDH/NLOTL3eUcPFHP9WODzc+nvviujC93lfHwtDHMShxNgK8btWdaONvUjquzBk93Zw6dqDcPpHAE+Tti6VLyoVKp8OjiGaC0eGwgLZ6BR3Ji6Urycaz8DFkFxzh4oo5brwlj+g3D8XRzprK2if9+fzeNLe1cFxlMwZ5y/LxciYvRs35bCTEjA/jNjGhqz7Sy6PXtjArzZcE94y+43LiiKPzpH9/g4erEsw9OvtKvaxP5O2JJBhcIIfqMkaE+PJU+wWp7sNaDP8y6hr98sJvNheXExehJv3UUHm5O+Hq68HbeYd7JO0xVXTMqFTz88zEXLDrQ+Rv0zROG8P7nRyitaiA86McfaoqioCidP+xs1WE08danB4kc5s8N0fruTxA9SgqPEMJutD5uPPvAZE6dbmboeV1kU68Jo/ZsKxu+Og7AQ9Mi0Pq4dXmtn0WF8OGX37N5dzn3J442b38z5yC7vz9F2tSruDFaby5eLW0dFBQa8PF0YcpYy1GB/9pazLZ9Few8VEXEUD8Cfd176isLG0jhEULYlYebE0PdrJ/L3HXTSFrbjTS1dNg0oaqXuzOTI4LYtr+Cu6dehauzhq/2V7BlrwF/b1fezDnItr0VpE29mv0ltWzcUUpDczsAza0d3DKxcyXig8fr2LDtOBNHBbK/pJb38o/wxIyYi963tc1Iu9F0RbN2N7V0ANazUwxWkgUhRK9QqVTMjB/d/YHnuXlCKF8XVbLzYBWjw/14+7NDXB3myzP3TWTrXgOZm46ybO1OAGKuCuAX1w/j06+Ps/azQ6jVKq4ZrePv2UUEaz14fPo4Pv/uJB9+eZTC708x/upAi3sZTQqbC8v5+N9H6TAqzL0zirHDbZs+qLm1g+yvSjh68jQVdc2caWzD2UnN9J8NZ9qUoVYv/Q42MrjABjK4YOCRnFjqL/lQFIU//v0bPN2cUKmg/FQjS395HYF+nV1lZxrb2Lavgshh/ubpgNo7TLz88R72H6tliM6Titomnn1gMsNCvOkwmlj8xnbaO0wse2wKLs4aTIrCoeN1fFRQzLGy01w9xJfm1g4qapu4P3E0t0wY0mWM35ed5u/r93OqvoWrwnwJ0Xqg13pwzHCGbw9Vow/w4MGkCCKG+lud295h4vuT9YQHe/e5dZFkyhwHk8Iz8EhOLPWnfOR+c4L/t+l7AB6/YyzXj+1+SYq2diMvfbSHopI67r31ahKvG2red/B4HSvf20XsuGA0GjV7j9ZwurGNQF83Ztx8FddFBtHSZuS1dfvZe6yGW68ZwuhwP4xGhQ6jCbVahZuLBlcXDUdKT7Phq+P4e7sye/pYRof7WcSx5+gp3sk7zKnTLdwycQj3TL0KN5fOjqea0y2sydpHseEMGrWKscO1XDsmiGvHBDns3aWuSOFxMCk8A4/kxFJ/ysfZpjYy/vYV14zS8WjyWJvPa2s3crTsNGOG+VuNnvvf9fv5en8l7q5ORI/UEnNVAEk3jOTs6WbzMUaTiQ+++J78nSe7vE/suBDuTxh90ec5re1GsgqOkbe9lEA/Nx69fSztHSb+9q/9dBhNpE29mur6ZnYcqKLmTIu5K9GR3XMmk8LeYzVEDvM3r8nULwtPcXExGRkZ1NfX4+fnx4o
VKxg+fLjFMUajkWXLllFQUIBKpeLxxx8nLS2t232rV68mJyfHvPT1ggULiIuLAyAjI4Nt27bh79/ZrJ02bRq//vWvLyl2KTwDj+TEUn/LR0Nz+w/dbbYPoe5Ka7uRsupGhgZ7mX/AXywnVXVNtBsVnNQqNGoVJkWhpc1Ia7sRFyeNuYuvO4dO1PH6hgPUnG4BIFTnybw7o81TBSmKwta9FbyRc4DEa8O597ZRNl23sq6Jb4oqiQj3Y3S43yXnqLGlnf/9VxF7j9Xw9L0TiPzhuVa/fI9n8eLFzJw5k5SUFNatW8eiRYtYu3atxTHr16/nxIkT5OXlUV9fT2pqKrGxsYSFhXW5LyYmhkceeQR3d3cOHjzIrFmz2LJlC25uncMzH3/8cWbNmuWoryqEsLOefv7h6qxhZKiPTcf+dP2iyxUx1J/nHr2Oj/99jA6jifRbR1l0qalUKm6M0XO84ix5O0oZFebLpIigi16v9kwL67eVUFBowPRDeyJM58Vtk4Zw7Zhgm0bUlVY18MrHe6g908qDSRGMGWb9HKonOKTw1NTUUFRUxJtvvglAcnIyzz//PLW1tWi1P44SycnJIS0tDbVajVarJT4+ntzcXB577LEu951r3QBERESgKAr19fWEhFzZcsRCCGFPbi5OzEzoemTfPbdezTHDGd7IOUBYkBfBPyl85aca+fy7kxQUGlAUhakTh5BwXXjnAoA7T/J/uYf4v9xD+Hu7og/wQB/gyRCdJ2GBXoQGenK6sZViwxmOlp9h614DHq5OLLz/Gq4e4mu37+2QwmMwGAgODkaj6azmGo2GoKAgDAaDReExGAyEhv44nl+v11NRUdHtvvNlZWUxdOhQi6Lz5ptv8sEHHxAeHs7vfvc7rrrqqkuKv6smY3+h0zlmfqv+RHJiSfJhra/k5E+PTOG3L37JXzP3MGG0Dp2fO96eLmzbU07hkVM4adTcck0Y9yZGEPxDV924UUHcddtoDpTUsv9YDSerGiitPMu2fQaaW41W93B31XBNRBBzZ4zH/yIv8/ZUPgbUezzbt29n1apVvPHGG+ZtCxYsQKfToVarycrK4rHHHiM/P99cBG0hz3gGHsmJJcmHtb6UExXw69QoPvzye7btKedsU+eLsVofV2bcPJK48aH4eLiA0WgVs87LhVtifpwWyKQo1J5u4WR1I+U1navTjgz1ITTAE7VaRUdrO9XV7VYx9LtnPHq9nsrKSoxGIxqNBqPRSFVVFXq93uq48vJyYmI63yI+v5XT1T6AXbt28fTTT7NmzRpGjhxp3h4c/ONUGampqbzwwgtUVFQwZEjXY/GFEKIviRzmz388dC3QOULvdGMbWh9XNOpLG+2mVqkI9HMn0M+dCaMCuz/BDhwyPi8gIIDIyEiys7MByM7OJjIy0qKbDTpHnGVmZmIymaitrSU/P5+kpKRu9+3Zs4cFCxbw0ksvMW7cOItrVlb+uFZ9QUEBarXaohgJIUR/4+KsQefnfslFp69wWFfbkiVLyMjIYM2aNfj4+LBixQoAZs+ezfz584mOjiYlJYXCwkISExMBmDdvHuHh4QBd7lu6dCktLS0sWrTIfL+VK1cSERHBwoULqampQaVS4eXlxauvvoqT04DqYRRCiH5FXiC1gTzjGXgkJ5YkH9YkJ5Z68hlP/2ynCSGE6Lek8AghhHAoKTxCCCEcSgqPEEIIh5LhXTa4lLXc+6qB8B16muTEkuTDmuTEkq356O44GdUmhBDCoaSrTQghhENJ4RFCCOFQUniEEEI4lBQeIYQQDiWFRwghGaNk1wAACQ5JREFUhENJ4RFCCOFQUniEEEI4lBQeIYQQDiWFRwghhENJ4RlA6urqmD17NklJSUyfPp3f/OY31NbWArB7927uuOMOkpKSeOSRR6ipqenlaB3rlVdeISIigsOHDwODOx+tra0sXryYxMREpk+fzn/8x38AUFxcTHp6OklJSaSnp1NSUtK7gTrIpk2bSE1NJSUlhTvuuIO8vDxgcOVjxYoV3HrrrRb/RqDrHFxRfhQxYNTV1Slff/21+fOf//xn5Q9/+INiNBqV+Ph4ZceOHYqiKMrq1auVjIyM3grT4fbt26c8+uijytSpU5VDhw4N+nw8//zzyvLlyxWTyaQoiqJUV1criqIoDzzwgJKVlaUoiqJkZWUpDzzwQK/F6Cgmk0mZPHmycujQIUVRFOXAgQPKhAkTFKPROKjysWPHDqW8vNz8b+ScrnJwJfmRwjOA5ebmKg899JBSWFio3H777ebtNTU1yoQJE3oxMsdpbW1V7rnnHqW0tNT8j2ow56OhoUGZNGmS0tDQYLH91KlTyqRJk5SOjg5FURSlo6NDmTRpklJTU9MbYTqMyWRSrrvuOmXnzp2KoijK9u3blcTExEGbj/MLT1c5uNL8yOzUA5TJZOK9997j1ltvxWAwEBoaat6n1WoxmUzU19fj5+fXi1Ha36pVq7jjjjsICwszbxvM+SgtLcXPz49XXnmFb775Bk9PT37729/i5uZGcHAwGo0GAI1GQ1BQEAaDAa1W28tR249KpeKvf/0rc+fOxcPDg8bGRv73f/8Xg8EwKPNxvq5yoCjKFeVHnvEMUM8//zweHh7MmjWrt0PpNbt27WLfvn3MnDmzt0PpM4xGI6WlpYwdO5aPP/6Y3//+9zzxxBM0NTX1dmi9oqOjg//f3t2FNPn+cRx//3TZQgqlJ0eLZEEqZDlnCWkZJqPS2oaFndSJYUGh68CeIHqEkiApsRYWVFSK9DDGNIylJ9VKRSiLrFhSQalp2ROmzfofyH9k/Xyq+zd/P/2+Du/tvq7r/p58dl/3ves6efIkx48fp6qqihMnTmC1WkdtPfxF7nhGoLy8PJ4/f47NZiMgIACNRsOrV698n799+5aAgIAR/+u+pqYGj8fDkiVLAGhqaiIzM5O1a9eOynoAaDQaVCoVaWlpAMydO5fQ0FDUajXNzc10d3cTGBhId3c3LS0taDSaYR7xP+vRo0e0tLRgMBgAMBgMjBs3jrFjx47KevxIo9H0WYPv37//UX3kjmeEOXLkCA8ePKCwsJCgoCAAZs+ezZcvX6itrQWgpKSEpUuXDucw/SIrK4ubN29SWVlJZWUlYWFhnD59mvXr14/KekDPtGJ8fDy3bt0Cet5MamtrIzw8nKioKJxOJwBOp5OoqKgRP60UFhZGU1MTz549A8Dj8dDW1saMGTNGZT1+NHHixD5r0N9ngyEbwY0gT58+JS0tjfDwcNRqNQBarZbCwkLq6urYvXs3nZ2dTJs2jcOHDzNp0qRhHrF/JScnY7PZmDVr1qiux8uXL9m5cyft7e2oVCqsVitJSUl4PB62b9/Ohw8fmDBhAnl5eeh0uuEe7j/O4XBQVFTEX3/17JqZnZ1NSkrKqKrHgQMHuH79Oq2trYSGhhISEkJZWVm/NfiT+kjwCCGE8CuZahNCCOFXEjxCCCH8SoJHCCGEX0nwCCGE8CsJHiGEEH4lwSPEf1Bqaip3794d7mEI8VskeIRQQGZmJkePHv3luMvlIiEhAa/Xq2h/ZWVlxM
fHK9omwJUrV4iIiKCoqKjX8UWLFknQCcVI8AihAIvFgsPh4Oe/xTkcDlasWIFKNfjVqZQOqaEKCQnh1KlTfPr0aVjHIUYuCR4hFJCSkkJ7e7tvGR6A9+/f+zYZu3//PhkZGcTFxZGYmMi+ffvo6uryfTciIoILFy5gNBoxGo3s3buXQ4cO9epj48aNnDlzBuhZheH27dsAFBQUkJOTw9atW9Hr9aSmplJfX+877+HDh5jNZvR6PdnZ2VitVvLz8/u8Fp1Oh16v9/UlhNIkeIRQgFqtZtmyZdjtdt+xa9euodPpiIyMJCAggB07dnDnzh1KSkpwu91cvHixVxsul4vS0lLKy8uxWCw4nU6+ffsG9Cxk6na7fYt7/qyyspLU1FRqa2tJTk5m//79AHR1dbF582YsFgvV1dWkpaXhcrkGvJ6cnBzOnj1Le3v775ZEiD5J8AihELPZTEVFBZ2dnQDY7XYsFgvQs1BrTEwMKpUKrVZLRkYGNTU1vc7PysoiJCQEtVrNnDlzGD9+PG63G4Dy8nLmz5/f53pyBoOBpKQkAgMDMZlMNDQ0AHDv3j28Xi/r1q1jzJgxGI1GoqOjB7yWqKgoFixY8MuzHiGUIMEjhELi4uIIDQ3F5XLx4sUL6uvrfXcojY2NbNiwgYSEBGJjY8nPz+fdu3e9zv95Sfn/PzeCnmdFJpOpz75/DCS1Wk1nZyder5eWlhamTp3qWwDz7/rpS3Z2NsXFxbS2tg7q+0IMlgSPEAoymUzY7XYcDgeJiYm+QNizZw86nY6Kigrq6urYsmXLLy8i/BgOACtXruTGjRs0NDTg8XhISUkZ8ngmT55Mc3Nzr75ev349qHNnzpyJ0WjEZrMNuV8h+iPBI4SCzGYzbreb0tJSzGaz7/jnz58JDg4mODgYj8dDcXHxgG2FhYURHR1Nbm4uRqPRt9XFUMTExBAYGMj58+fxer24XK5eLx4MZNOmTVy+fJmPHz8OuW8h+iLBI4SCtFoter2ejo4O386nANu2bcPpdBIbG8uuXbtYvnz5oNozm808efKk32m2/gQFBVFQUMClS5eYN28eDoeDxYsX+zYJHMj06dMxmUyyFbRQlOzHI8S/WE1NDbm5uVRVVf0yFfe7Vq9ezZo1a0hPT1ekPSGGSu54hPiX+vr1K+fOnWPVqlV/FDrV1dW8efMGr9fL1atXefz4MQsXLlRwpEIMzeD/Ti2E8BuPx0N6ejqRkZEcPHjwj9pqbGzEarXS0dGBVqvl2LFjTJkyRaGRCjF0MtUmhBDCr2SqTQghhF9J8AghhPArCR4hhBB+JcEjhBDCryR4hBBC+JUEjxBCCL/6H9bHkngVYtWRAAAAAElFTkSuQmCC) Next ablation study we compare MLE and Shrinkage Estimator for varying $N$ show that that for small values of $N$ improvement is more ###Code # print(mle_scores.shape) # print(jse_scores.shape) # plt.plot(mle_scores,label = "MLE") # plt.plot(jse_scores,label ="Shrinkage") # plt.xlabel("Varying N") # plt.ylabel("Error") # plt.legend() ###Output _____no_output_____ ###Markdown ![download (7).png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAZAAAAEMCAYAAADqG+D0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nOzdeXxU5b348c9ZZjJZJ8lkm5BAIBEIS1gFI4gggVAMDS5ARXtdKrbK1d7+bhdqrUC1i3a53mul3nrVirTW4lJKpIiIVsGwryFsgYQAmez7Npk5c35/DAyEQPZMFp7368XLTOY55zx5wHznWb+Srus6giAIgtBBcm9XQBAEQeifRAARBEEQOkUEEEEQBKFTRAARBEEQOkUEEEEQBKFTRAARBEEQOkUEEEEQBKFT1N6ugDdVVNThcnV824vFEkBZWW0P1GjgEG3UOtE+bRNt1LreaB9ZlggJ8b/u+zdUAHG59E4FkEvXCq0TbdQ60T5tE23Uur7WPmIISxAEQegUrwWQ3NxclixZQmpqKkuWLCEvL69FGU3TWL16NSkpKcyZM4f169d73ispKeHxxx9nwYIFfO1rX2PDhg3eqrogCIJwDV4bwlq5ciVLly4lPT2dDRs28Oyzz7J27dpmZTZu3Eh+fj5btmyhsrKShQsXkpycTExMDL/61a8YM2YMf/jDHygvL+fuu+9mypQpWK1Wb/0IgiD0MF3XqagooampEehbwzW9rbhYxuVy9cCdJYxGEyEh4UiS1KErvRJAysrKyM7O5s033wQgLS2N5557jvLyckJDQz3lNm3axKJFi5BlmdDQUFJSUti8eTOPPvoox48f58EHHwQgNDSUkSNH8s9//pNHHnnEGz+CIAheUFtbhSRJREbGIElihP1KqirjdHZ/ANF1F5WVpdTWVhEYGNyha73yN2Sz2YiMjERRFAAURSEiIgKbzdaiXHR0tOe11WqlsLAQgNGjR7Np0yZ0XefcuXMcOHCAgoICb1RfEAQvaWioJTAwWAQPL5IkmcDAEBoaOr7Cq9+swlqxYgW/+MUvSE9PJzo6muTkZE9Aai+LJaDTzw8PD+z0tTcK0UatE+3THjo+PsYOD6XcKFS1ZwKrohgBvcP/Rr0SQKxWK0VFRWiahqIoaJpGcXFxi/kLq9VKQUEBSUlJQPMeSWhoKL/5zW88ZZctW0ZCQkKH6lFWVtvhZXBZZ8p4+5OTzBo/iNmTBmFQOxa0bhTh4YGUlNT0djX6LNE+bQsPD8TlcqFpOmL+o6WeGsK6xOVytfg3KstSqx+8vdJPtFgsJCYmkpGRAUBGRgaJiYnN5j8A5s2bx/r163G5XJSXl7N161ZSU1MBqKiowOl0ApCZmcnJkydJS0vr8boPiQokNjKQv32Ww9N/3MXOo4W4RA4uQbhh3HvvAtLTU9E0zfO9TZs2Mn36ZN5//102bdrIM8/8sMV1+/fvZfbsaTz00FLPn+eee9abVe9xXhvCWrVqFStWrGDNmjUEBQXxwgsvAO6exFNPPcXYsWNJT0/n0KFDzJ07F4Dly5cTGxsLwOHDh/n5z3+OLMuEhITw6quv4uvr2+P1DvQzsnpZMp/vOcv6bTn8cWM2H+85x+JZCSQOCenx5wuC0PsslnB2784kOXk64A4gw4ePbPO6uLhhvP762z1dvV7jtQASHx/fbF/HJa+99prna0VRWL169TWvv/3227n99tt7rH5tGR0XSuLDN7PzaCEffHGGX79zgKR
4C4tmxjMovPNzK4Ig9H3z56exaVMGycnTuXDhPI2NjcTHd2wIfSDqN5PofYEsSdw6xsrNIyPYuvc8GZlnefaN3dyWZGXhbcMIDvDp7SoKwoCy44iN7YdtbRfshOlJVqaNbd8+sgkTJvPhh+9RXV3N5s0fMW/efE6cON7mdXl5Z3jooaWe17ffPouHH17W6Tr3NSKAdIJBVfjaLUO4bVw0G3fksW3/eXZmFzFvymBSpwzG10c0qyAMJJIEd9wxh08/3cLWrR/z6qtvtCuAiCEs4boCfA3cl3ITsycN4oMvzvCPHXl8frCA9OlDmTHOiiKLteyC0BXTxra/l9DT5s27k29/+yHGjZuA2dyxDXcDlQgg3SAixI/vpI9hzs1VrN+Ww9sfn+CTPedYNDOe8TeFiTXtgjAADBoUw7JlTzBq1OjerkqfIQJIN4qPNvOj+ydyMKeU9z4/zcsfHOGbc4cza2JMb1dNEIRukJ5+9zW/n5m5g7vumu95PX/+AiZNurnFHEhYWBi/+c3/9Hg9vUXS9RtnU0NnNhJC5zaBaS4XP/xDJiMHB7NswcD/xCI2yrVOtE/bwsMDOXIki6ioIb1dlT6ppzcSFhaebdH2fWIj4Y1IkWXCg30prWrs7aoIgiD0CBFAelCY2URZtQgggiAMTCKA9CBLkImKGjtOree6nYIgCL1FBJAeZDGb0HWorLH3dlUEQRC6nQggPchiNgGIeRBBEAYkEUB6UFiQO4CIeRBBEAYiEUB6UOilACJ6IIIgDEAigPQggypjDjBSKnoggtBvbdu2lYcfdufzWLr0Hlat+gkA06dPpr6+vl332L79X7zyyn+3WW7//r1861vf7FJ9vUnsRO9hYUEm0QMRhH6qtLSU3/3uV7z++joiI6PQdZ1Tp0506B5Op5Pp029n+vTeS0fRU0QA6WEWs4k8m9iBLAid4Ti5A8eJL3rk3oYRMzAMn9ZqmfLyUhRF9RyeKElSs0RS7733V7744nOqqqpYvvwpZs6cDbh7Jw8/vIzMzB1MnZrMoEExfPXVlzz//Ivs37+X//mf3zFq1GiOHj0CSKxe/Qvi4oY2e3ZNTQ0/+ckPmDbtNu65Zwn/+Z//QWVlFXa7nVGjRvODHzyNwWDA4XDwu9+9yIED+wgJCeGmm4ZTXl7G88+/CMC6dX/iX//ahqZphIVF8KMf/QSLJaxb2lAMYfUwS5CJ8ppGkQZXEPqhhIThjBo1mnvuuZNnnvkhf/vbX6iqqvS87+/vz//931p++tPVvPTSb5pd6+Pjw//931qWLXu8xX1zc0+zcOE9vPXWX7njjhTeeuv1Zu8XFtr47ncf56677mXJkvtRFIWf/ewXvP7627z99rtomsZHH20AYMOG9ykqKmTdur/x0ktrOH78mOc+H3+8iQsXLvC///sn3njjzyQnT+P3v3+p29rHaz2Q3NxcVqxYQWVlJcHBwbzwwgvExcU1K6NpGs8//zxffvklkiTx2GOPsWjRIgDKysr48Y9/jM1mw+l0MnXqVJ555hlUtW93osLMJpyaTlVtEyGBIuGUIHSEYfi0NnsJPUmWZX75y99y5kwOBw7s58svP+cvf3mbtWv/CsDs2akAjB49ltLSEux2Oz4+7v/Pv/a1tOved/DgIZ6ezOjRY9mx40vPe2VlpTz55Hd45pnVjBs3HgCXy8Wf//w2X321A5dLo6amBpPJvUhn//59zJs3H1VVUVWVlJRUDh8+AMD27V9w/PgxHnnkAQA0zUlAQPdlUPXab9+VK1eydOlS0tPT2bBhA88++yxr165tVmbjxo3k5+ezZcsWKisrWbhwIcnJycTExPDqq68SHx/PH//4RxwOB0uXLmXLli3Mnz//Ok/sGy7tBSmrbhQBRBD6qWHDEhg2LIF77lnMAw8s4sCBfQAYjUbAnY4b3B+CL/H19bvu/YzGy78LZFludl1gYCAREVHs3LnDE0A++WQzhw4dYM2a1/Dz82ft2jc4dy6/zXrrus6DDz5CWlp6B37a9vPKEFZZWRnZ2dmkpbkjclpaGtnZ2ZSXlzcrt2nTJhYtWoQsy4SGhpKSksLmzZsB99hjXV0dLpeLpqYmHA4HkZGR3qh+l1jEUl5B6LdKSorJyjrseV1cXERlZQVWa3SPPdNo9OFXv/oteXlneOml36DrOrW1NQQHh+Dn509tbS2ffLLZU37ChEls2fJPnE4ndrudbds+8bw3ffoMTypegKamJk6dOtltdfVKD8RmsxEZGemJ0oqiEBERgc1mIzQ0tFm56OjLfzFWq5XCwkIAnnjiCZ588kmmT59OQ0MD999/P5MmTepQPVo7lrgt4eGBnbouIMgXgAanq9P36C8G+s/XVaJ92ibLMqral6ZmXbzxxh8pLLTh4+ODy+Xi299+glGjRgHuI9avrO+Vr6/8WpYlJElCVWUURUaS8Lx35etLX/v6+vDLX/6aVaue4de//gVPPfUfbN/+Bffffy8hISGMHz8Ru92Oqsrce+8izpw5xTe/uZjg4GCGDh3qeX5a2gJqaqp48snHAHeP5O67F5GYeHkhwCWyLHf436hX8oFkZWXxox/9iI8++sjzvfnz5/PrX/+a0aMv58pYsGABP//5z0lKSgLgtddeo6ioiGeeeYa//vWv5OTk8PTTT1NXV8eyZct46KGHmDdvXrvr4c18IFd68qUvuDkxkn9LHdHpe/R1It9F60T7tE3kA2lda/lA6uvr8PPzp6mpiRUr/h+zZqWwYMHCDt2/z+YDsVqtFBUVecb5NE2juLgYq9XaolxBQYHntc1mIyoqCoB169bx9a9/HVmWCQwM5I477mDXrl3eqH6XhZl9xRCWIAg95rvffYKHHlrKQw/dR0xMbKsT+N3JK0NYFouFxMREMjIySE9PJyMjg8TExGbDVwDz5s1j/fr1zJ07l8rKSrZu3cqf//xnAGJiYvjiiy9ISkqiqamJzMxM5syZ443qd5nFbKKwvH07VgVBEDrqtdfe6pXnem2wcdWqVaxbt47U1FTWrVvH6tWrAVi2bBlHjhwBID09nZiYGObOncvixYtZvnw5sbGxADz99NPs27ePBQsWsHDhQuLi4li8eLG3qt8llou70W+g7MGC0Gni/xPv62ybi5zo7dDV8este87x109P8d9PTSfQz9jp+/RlYoy/daJ92hYeHsjRo8cIDY1EVQ29XZ0+pydzojudDsrLi4iIiGn2/T4xB3KjCzOLY90FoT18fQOoqalE10UWT2/RdRc1NRX4+nZ8lWrf3sY9QFy5FyQuKqiXayMIfVdAgJmKihKKis4DN8zgSLvIsozL1ROBVcJoNBEQYO7wlSKAeIFnN7pYiSUIrZIkidDQiN6uRp/UF4dBxRCWF/ibVHyMikhtKwjCgCICiBdIkuTOCyLmQARBGEBEAPESi1kklhIEYWARAcRLLGbRAxEEYWARAcRLwoJM1DU6abA7e7sqgiAI3UIEEC8RK7EEQRhoRADxkkt7QUrFMJYgCAOECCBeEiZ6IIIgDDAigHhJoL8RVZHFRLogCAOGCCBeIksSliAf0QMRBG
HAEAGkh+j2Ouo/+jWO07s937OYTWI3uiAIA4YIID1A1xw0bPkftAtHcZ494Pm+RexGFwRhABEBpJvpuovGz19Hs51AMgXiqi72vBdmNlFd14TDqfViDQVBELqH107jzc3NZcWKFVRWVhIcHMwLL7xAXFxcszKapvH888/z5ZdfIkkSjz32GIsWLQLghz/8ISdOnPCUPXHiBK+88gqzZ8/21o/QLk17PsB5eifGm+9FrynBmbff855nL0i1nahQv96qoiAIQrfwWgBZuXIlS5cuJT09nQ0bNvDss8+ydu3aZmU2btxIfn4+W7ZsobKykoULF5KcnExMTAwvvviip9zx48d58MEHue2227xV/XZpyv6MpoMZGEbOxDj+ThyH/4neWINur0Py8b+8F6SqQQQQQRD6Pa8MYZWVlZGdnU1aWhoAaWlpZGdnU15e3qzcpk2bWLRoEbIsExoaSkpKCps3b25xv/fee48FCxZgNPad9LDO/EPYd6xFiU3CZ/o3kSQJKSgSAFd1CSB2owuCMLB4JYDYbDYiIyNRFAUARVGIiIjAZrO1KBcdHe15bbVaKSwsbFamqamJjRs3cs899/R8xdtJK8mjYesaZMtgfFOeQJLdP6dsdifGcVUXARAS6IMsSWIiXRCEAaHfZSTcunUr0dHRJCYmdvja1pLDtyU8PPCa33dUFlOw5SVUv0Cil/4UNTDE857LPIw8wNdZScjF68OCTdTateverz8biD9TdxLt0zbRRq3ra+3jlQBitVopKipC0zQURUHTNIqLi7FarS3KFRQUkJSUBLTskQC8//77ne59lJXV4nJ1PM/y9VJJ6vY66jf8HJfDjt/8H1DRqEJj83KSXzA1tnycF68PDvChoKimz6Wm7Kq+mG6zLxHt0zbRRq3rjfaRZanVD95eGcKyWCwkJiaSkZEBQEZGBomJiYSGhjYrN2/ePNavX4/L5aK8vJytW7eSmprqeb+wsJB9+/axYMECb1S7VZf2eriqi/Cd+xRKyKBrlpPNkehVl5fyWoJM4kBFQRAGBK/tA1m1ahXr1q0jNTWVdevWsXr1agCWLVvGkSNHAEhPTycmJoa5c+eyePFili9fTmxsrOceH374IbNmzcJsNnur2td05V4P08xHUaOvP5wmB0V65kDAPZFeUWPHqbm8UVVBEIQe47U5kPj4eNavX9/i+6+99prna0VRPIHlWh5//PEeqVtHefZ6TLkXQ0Jyq2UlcwR6QzV6UwOS0Zcwswldh8oaO2HBvl6qsSAIQvcTO9E7yLPXI3EmxnF3tlle9izldQ9jXd5MKIaxBEHo30QA6YBmez2mufd6tEUOar6UN8yzmVAEEEEQ+jcRQNrpens92uIJIBcn0kODfACxmVAQhP5PBJB2cFQW07D5v5BMAfjO+x6SwdTuayWjL5KvGf1iD8SgKpj9jWIlliAI/Z4IIG3Q7XUUvvtzdM2B79f+H7JfcIfvIZsjW5zKK3oggiD0dyKAtMFpO4GzovW9Hm2RgiJwVTVfyism0QVB6O9EAGmDOmQCQ773Jmr0yE7fQw6KQK+vRHfYAfdmwvLqRlx6x3fFC4Ig9BUigLRBkiRkn67t15DNLZfyOjWdqtqmLtdPEASht4gA4gWX94K4h7Eu5QURw1iCIPRnIoB4gedY94tLecNEXhBBEAYAEUC8QDL6IZkCPUt5xW50QRAGAhFAvES6Yimvyajib1LFbnRBEPo1EUC8RL7WUl4RQARB6MdEAPESOSgSva4c3eleeRVm9hVDWIIg9GsigHjJ5aW8JYB7JVZZVSO62AsiCEI/JQKIl1x9Kq/FbMLu0KhrdPZmtQRBEDpNBBAvudQD0a/aC1Ja1dBrdRIEQegKrwWQ3NxclixZQmpqKkuWLCEvL69FGU3TWL16NSkpKcyZM6dFBsNNmzaxYMEC0tLSWLBgAaWlpV6qfddJPv7g4y/2ggiCMGB4LaXtypUrWbp0Kenp6WzYsIFnn32WtWvXNiuzceNG8vPz2bJlC5WVlSxcuJDk5GRiYmI4cuQIv//973nrrbcIDw+npqYGo9Horep3C3d+9KsyE4oAIghCP+WVHkhZWRnZ2dmkpaUBkJaWRnZ2NuXl5c3Kbdq0iUWLFiHLMqGhoaSkpLB582YA/vSnP/HII48QHh4OQGBgID4+Pt6ofreRzRGeORB/k4qPURF5QQRB6Le8EkBsNhuRkZEoijuLn6IoREREYLPZWpSLjo72vLZarRQWFgJw+vRpzp07x/33389dd93FmjVr+t0KJjkoEr22DF1zIEkSYUFiL4ggCP2X14awukrTNE6cOMGbb75JU1MTjz76KNHR0SxcuLDd97BYAjr9/PDwwE5fe0lNzBBK9usEq/UYw2KwhgdQXtXYLffuCwbKz9FTRPu0TbRR6/pa+3glgFitVoqKitA0DUVR0DSN4uJirFZri3IFBQUkJSUBzXsk0dHRzJs3D6PRiNFoZPbs2Rw+fLhDAaSsrBaXq+O9lvDwQEpKajp83dU0Kchdj7xcVN1MoEnlWG5dt9y7t3VXGw1Uon3aJtqodb3RPrIstfrB2ytDWBaLhcTERDIyMgDIyMggMTGR0NDQZuXmzZvH+vXrcblclJeXs3XrVlJTUwH3vMn27dvRdR2Hw8HOnTsZObLzSZ56g2Rufqx7mNlEXaOTBnvv7wUpr27kdEFVb1dDEIR+xGtDWKtWrWLFihWsWbOGoKAgXnjhBQCWLVvGU089xdixY0lPT+fQoUPMnTsXgOXLlxMbGwvAnXfeSVZWFvPnz0eWZaZPn869997rrep3C8knAIy+nqW8V57KGxPe+eG1riirauSjnWfZfrgAzaXzu+XTMAf0r8UJgiD0DknvbzPRXdDbQ1gAdR+sQjIF4Df/+5y+UMXP397HU/cmMT4hrFvu314llQ18lHmWHUfcCxnGJ4Sx72QJD88fyW1J0W1c3ZIYfmidaJ+2iTZqXV8cwuo3k+gDhWyORCs+A/TOXpDiinoyMs+SmVWIJMGM8dHMnzqE0CAfvr/mKw7nlHUqgAiCcOMRAcTL5KAInGd2o2tOgvyNqIrklVN5C8vryfgqj51Hi1AUiVkTBvG1W4YQEnh5uGpcvIXM7CIcThcGVZxyIwhC60QA8TLZHAm6jl5bimyO8pzK21MKSuvIyMxjV3YRBkUmZXIM86YOJvga8xxJ8WF8frCAk+cqGT00tOXNBEEQriACiJdJQRdXYlUVuwOI2dQjPZALJbVs/CqPPceKMRhkUqcMJnXKYMz+1z/+JTEuBIMqc+h0qQgggiC0SQQQL2txrHuQiUOny7rt/hdKatmwPZe9J0rwMSrMTx7CnJtjCfJr+9wwH4NC4pAQDuWUct/sm5AkqdvqJQjCwCMCiJdJvkFgMDU7VLG6rgmHU8OgKl26d4PdyS/W7QMg7dY45t4cS4CvoUP3SIq3cPh0GYXl9Vgt/l2qjyAIA5uYKfUySZLcp/JWXd5MCFBWbe/yvfefLKHBrvEfi8Zx94xhHQ4e4A4gAIdyuq9XJAjCwNSuAOJyucjMzKSpqamn6
3NDuPJU3kuJpbpjIn1XdhFhZhMJg8ydvkeY2ZeYcH8On+4/uVYEQegd7QogsizzxBNP9Lv8G32VHBSJXl2K7tKa7Ubviqq6Jo7mlTN1VGSX5y7GJYRx8lwV9Y2OLt1HEISBrd1DWDfffDMHDx7sybrcMOSgCNA19NoyQgJ9kCWpy6lt9xwrQtfhltFRXa5fUrwFl66TlVvedmFBEG5Y7Z5Ej46OZtmyZcyePZuoqKhmn3K/+93v9kjlBqrLhyoWowZFEBJo7PIQ1s7sImIjAhgU1vWJ7/hoM/4mlUM5ZUxJjOzy/QRBGJjaHUDsdjspKSkAFBUV9ViFbgTypQBSVQQxY7CYfbsUQIor6jlTUM2imfHdUz9ZIinewpEzZbhcOrIslvMKgtBSuwPIL3/5y56sxw1F8jWDavSsxLIEmTh5rqLT99uV7b7P1FHd11tIig8j82gRZwqqSYjp/KS8IAgDV4f2geTl5ZGRkUFxcTERERGkpaURFxfXQ1UbuDxLeS+txDKbqMhuQnO5UOSOrazWdZ2d2UUMjw0m9OKKrs5wNVSjN1ShhLqPzx8zLBRZkjh0ulQEEEEQrqndv622bdvG3XffTW5uLmazmdzcXO655x4+/fTTnqzfgCUHRaBf3EwYZjbh0nUqOrEXJL+oFltZPbeM7lzvw1VTSuOOddT95fvUv78KV30lAP4mAzfFmMV+EEEQrqvdPZD/+q//Ys2aNdxyyy2e7+3atYvnnnuO2bNn90jlBjLZHIkz/xC6y3V5L0h1I2HBvh26z87sQhRZYvKIiA5dp5VfoOnQJpw5OwFQh07EeWYPzpxdGJPcWSDHJYTxt89yKKtq9Cw3FgRBuKTdPZDCwkImT57c7HuTJk2isLCwXdfn5uayZMkSUlNTWbJkCXl5eS3KaJrG6tWrSUlJYc6cOaxfv97z3ssvv0xycjLp6emkp6ezevXq9la9T5KCIsDlRK8r9+xGL+3gRLrLpbMru4ixwyzt3nWuFeXQ8PF/U//eT3Dm7sEwejb+972Ib8py5LA4HDmZnrKXdqUfPiN6IYIgtNTuHsjIkSN54403eOyxxzzfe/PNN0lMTGzX9StXrmTp0qWkp6ezYcMGnn32WdauXduszMaNG8nPz2fLli1UVlaycOFCkpOTiYmJAWDhwoX86Ec/am+V+zT5iqW8oZEjgI5vJjx5rpLK2qY2h690XUc7n0XTwY/QbMfBxx/jxHQMY1KQTYGecoaEZOw738FVaUMOtmK1+BEebOJQTimzJgzq4E8oCMJA1+4eyKpVq3jvvfeYPn06ixYtYvr06fztb39j1apVbV5bVlZGdnY2aWlpAKSlpZGdnU15efONaps2bWLRokXIskxoaCgpKSls3ry5Yz9RPyF7jnUvxKAqmP2NHe6B7MwuxMeoMO466XB1lwvH6d3Uf7CKhn/+Fld1ET633EfA0t/iM/muZsEDQI2fAkieXogkSYyLD+PY2QrsDq3jP6QgCANau3ogLpeL4uJiPvzwQ44dO+ZZhTVu3DgMhraHTmw2G5GRkSiK+7RZRVGIiIjAZrMRGhrarFx09OV0qlartdkQ2UcffcT27dsJDw/nySefZMKECe3+QfsayT8YFEOzU3k7shfE4XSx93gJE28Kx8fQ/BRfXXPgOPUVTYc2oVcVIZmjMM14BPWmZCTl+n9fsn8IyqBEHKcyMU66C0mSSEqwsHXfeY6frbhuoBIE4cbUrgBy6SysAwcOtJgH8ZZvfOMbfOc738FgMLBjxw6eeOIJNm3aREhISLvv0Vpy+LaEhwe2XaiD7KFRGBrLCA8PZFBEIDnnK9v9nMwjNurtTlJvjWt2TfXBrVT861202nKMUfEEp3wT/+FTkOT2HRVfM2EWJRmvEOQoxDRoONND/FjzYRYnL1STkjy01Wt7oo0GEtE+bRNt1Lq+1j7tngO5dBbW+PHjO/wQq9VKUVERmqahKAqaplFcXIzVam1RrqCggKSkJKB5jyQ8PNxTbtq0aVitVk6dOsWUKVPaXY+yslpcLr3D9Q8PD6SkpKbD17VF9wujsaSAkpIaAnwUSirqKSquRm7HYYhbduYR6GdgUIjJUzdXXQV1H/0BOSIe3xmPoAwaTYMk0VBW3/46hY0GRaVkz1ZMRvffz6i4UHZm2bh3xtDrHtTYU200UIj2aZtoo9b1RvvIstTqB2+vnIVlsVhITEwkIyOD9PR0MjIySExMbDZ8BTBv3jzWr1/P3GTgEV8AACAASURBVLlzqaysZOvWrfz5z38G3MenREa65w2OHTvGhQsXGDq09U/EfZ1kjsR1Pgtdd2Exm3BqOtV1TdfMV36lBruTQzmlzEiKbrbx0Jm7FwDfmY8iB1uvd3nrdTL6oQ4ej/P0bvTk+5BklXHxFvafLOFccS2DI/vWJyBBEHqP187CWrVqFStWrGDNmjUEBQXxwgsvALBs2TKeeuopxo4dS3p6OocOHWLu3LkALF++nNhY987o3/3udxw9ehRZljEYDLz44ovNeiX9kRwUCZoDva7SsxektKqxzQCy/2QJDqeLqVetvnLm7kUOiel08LhEvSkZZ+5etPPZqIOTLi/nPV0mAoggCB7tCiCaphEVFcXjjz/e6Zwg8fHxzfZ1XPLaa695vlYU5br7Oy4FnIHk8lLeIixmd6Asq2psMyHUzouJo+Kjgzzfc9VXotlOYpyU3uV6qbFJYPTDkZOJOjgJc4APcVGBHDpdStqtcV2+vyAIA0O7lvEqisI777yDqooU6t1JDnLvHndVFTXbjd6aqlo72Xnl3DK6eeIoZ95+QEcdenOX6yUpBgzDbsaZtx/d4T5eZVxCGGcuVFNdL7JSCoLg1u59IOnp6bzzzjs9WZcbjuQfCrKKXl2Mr4+Kv0ltcynv7uPF6DpMHdU8cZTzzB7kYCtySPR1ruwYNSEZnHacZ/cD7l3pOpAldqULgnBRu7sUhw8fZt26dbz++ustJtEvTXQLHSPJMnJQ+OVj3c2mNnsgu7KLGHxV4ihXQzWa7TjG8WldTmd7iWIdjuQfiiNnJ4aEZIZEBWL2N3Iop4xbx3RtjqU36bqOS9c7fOqxIAgttTuALF68mMWLF7f4fnf9wrpRSUGRlzcTBpkoqrh+atuiS4mjZjVPHOXM2w+6jjqs68NXnnpJMoaEW2g6/DGuxhpkUyBJ8Rb2nijGqblQlf75C/jj3ef4eE8+L3w7GaOhfXtjBEG4tjZ/Czz//PMA3HXXXdx11104nU7P13fddZc4zr2LZLM7L4iu657d6Lp+7b0qu7KLkICpiS1XX0lBkcgXc3l0FzUhGXQN5+ndgHsepMGukXO+qluf4y26rvP5gQtU1TZxRAzFCUKXtRlAPvjgg2avf/3rXzd7vWPHju6t0Q1GDooAZxN6QxVhZl/sDo26RmeLcrqus/Noy8RRemMt2oVsDMMmd3tvULHEIofEeM7GGhUXgqq4k0z1R6cLqimudPfwdh0r7uXaCEL/12YAufrTcFuvhY65Mj+6ZyXWNSbS84tqKSyvb7n34+wB0F3dsvrqWtSbbsFVlIOr
ugSTUWXE4JB+m2QqM6sQoypz65goDueU0mBvGagFQWi/NgPI1Z9q23otdMylpbx6VdEVeUFazoNkHr124ijHmT1IgWHIYUN6pH6GeHcCsUu9kHHxFgrL6ymqaP/xKNdTVtXIYS/1Zpyai93HipgwPJwZ46Jpcro4lNM/e1KC0Fe0GUA0TWPnzp1kZmaSmZmJ0+ls9trlcnmjngOWFGABScFVXezJ+nd1D8Tl0tl9rGXiKN1eh3bhKOrQ7h++ukQODEOJGo4zZye6rpN08UTew13shRSU1vH823t5af1hCkrruqOqrTpyuoy6RifJoyNJiDETEujDbjGMJQhd0uYqLIvFwtNPP+15HRwc3Oz11edZCR0jyQpSUDiu6iL8TSo+BoXSq5bynrhO4ijn2YPg0jB04+qra1ETkrFvfwtXWT4RYUOwWvw4dLqUOTd3btL+fEktv3nnAEgSiizx+cELLE0Z3s21bi7zaCGBfgYSA6tx7N/OlJGj2LrvAnWNDvxN7cvmKAhCc20GkG3btnmjHjc0OSgCV1UxkiQRdo28IDuPXjtxlDN3L5J/KHL4sB6tn2HYzdi/WocjJxMlbAjjEsL4ZM85GuxOfH06djrB2cIafvvuQVRF4gf3TWDD9ly+OlLIPbfHt8hr0l3qGx0czClj5vhoHPs+RDt3hOTbR/LxHp39J0q4bVz3bL4UhBtN/1zMP8C0WMp7RQ/E4XSx90TLxFF6UwPO80d6dPjqEskUgBqb5B7GcrkYF29Bc+lk55W3ffEVcm3V/PqdAxgNMj+6fyJWiz+zJgyi3u5k97GOH9DZXntPlODUXNya4It2PguAiOqjRAT7sqsHnysIA50IIH2AHBQBjkb0hmosQc17IIdPl9Fgd4/dX8mZfwg0Z7duHmyNmpCMXl+JZjtOQowZPx+1Q6uxci5U8Zu/HsDPpLJi6UQiQ/wAGB4bTHSYP58fuNBTVeerrEKiQv2wVmeBriOZo3Ce2cOUxHCOna2gqk6c7yUInSECSB/gyY9+cSK9rtHpWWK6K/vi2H1c88yLzjN7kPyCUSLjW9yvJ6hDxoPBhDMnE0WWGTMslMOnS3G1Yxn3ifwKfvvuQYL8jKy4fyJhwb6e9yRJYub4aHJtNeQVVnd7vUsrGzh5rpLkMVE4T32FHDEMY9I89KpCbhnkRNdh73ExmS4InSECSB9waS+IXn15KW9ZdSMNdicHc8qYMjKy2dlNuqMR57nDqEMnIUne+SuUVCPq0Ek4zuxFdzYxLiGM6noHZwtbz5CWnVfOf60/RGigDz9cOrHZJshLbh0ThdEg90gvZGe2e4jq1mgHrvJzGG6ahho3ESQZS3kWg8L9e3T4TBAGMhFA+gAp0AKS3GIz4b6LY/ctVl+dOwyao8c2D16PISEZHA048w8xdpgFSaLVvRRZZ8r47/cOEx7syw+XTiQksHmiLFdDNVpRDn4mA1MTI9mZXUT9NXbhd5au62QeLWR4jBl/216QFQzxU5F9g1CiE3Gc2cOUkRGcOl9FeRuHWAqC0JIIIH2AJKtIgWHN94JUN7Iru5Aws4lhVySOAnCe2YvkG4QS1bNLX6+mRI9C8jXjzNlJgK+B+EHm686DHMwp5X/eP4w11I8f3jcBs3/zRGSu6hLq//4c9Rt+jquykFkTB9HkcJF5tLDb6nu2qAZbWT23jA7HmZOJOng8ksmd31mNn4JeXcRUqzvfidgTIggd57UAkpuby5IlS0hNTWXJkiXk5eW1KKNpGqtXryYlJYU5c+ZcM4PhmTNnGDdu3IDLUCgHReCqLibI34iqSJy+UE322YoWiaN0px1n/iHUuElIXj6SXJJl1PipOPMPodvrGBdv4WxRDRU19mbl9p0o4ZUPjhATHsD375tAoN9VwaOykPqNv0RvqgdZpunoJ8RFBREXFcjnBy502/E4X2UVoioSk4NK0RuqUYff6nnPEDcJJAVzyWHiogLFMJYgdILXfgOtXLmSpUuX8vHHH7N06VKeffbZFmU2btxIfn4+W7Zs4d133+Xll1/m/Pnznvc1TWPlypWe3OwDiRwUiauqEAkIDTKx+1gRug63XJ046lwWOO1eW311NcNNyeBy4sjd69mXcuXJtruPFfGHv2cRZw3k+9+Y0GznPIBWfo76jb8AzYFf2grU+FtwnNiObq9j1oRBXCit41Q3nParuVzszi5iXHwYSt4uJJ8A1NhxnvclUwBKzCjPMFZeYU23HM8iCDcSrwSQsrIysrOzSUtLAyAtLY3s7GzKy5vvI9i0aROLFi1ClmVCQ0NJSUlh8+bNnvf/+Mc/MnPmTOLi4rxRba+SzRHQ1IBuryXMbEJz6QyOCCD6isRRAM7cPUg+ASjWEb1Tz7A49zLYU5kMCvPHEuTjmQf5KsvG//7jKAkxZv7f4vH4mZpvMtRKcqnf+CuQZHy//mMUSyzGsXPAacdx4gumJEbi66N2y2T60dwKqusdTBtpxpm3HzVhKpLSvD6GYVPQa0qYGuWe/xDDWILQMV4JIDabjcjISBTFvRFOURQiIiKw2WwtykVHX94VbLVaKSx0j4kfP36c7du389BDD3mjyl7nWYl1xUT6LaOb9z50ZxPOswdRh05EknsnGZIkSRgSktFsJ9DrKkhKCONoXjkf7cjl9YxjjBwcwvcWjWuxQ91ZeIr6jBeRjL74ff1plGD337MSFodiHUFT1laMKkwbE8XeE8Vdzr2+82gh/iaVEfoZ0BwYbprWoowaNxFkBd/Cg9wUYxbDWILQQR07h6KXOBwOfvrTn/LLX/7SE4Q6w2IJ6PS14eGBnb62PZrkYZwH/PVqEgZHsyOrkHnThhEecnnPRN3JPdQ6GrGMn4FfD9enNY4pszm370N8Cg8wY2Iyn+2/wKsfHGbiiAiefnhKiyNJGnIPU/jP32AItGC9fyVqUPMjWepuTafo/RfxrzjOXXeMYuu+8xw8Xc49d9zUqfrVNzrYf6qU2ZNjUfPXI1kGETkq6Ro79gPRho7DcXYvsyfP4tW/Z1Hv1BliDbrmfbuqp/8NDQSijVrX19rHKwHEarVSVFSEpmkoioKmaRQXF2O1WluUKygoICkpCbjcIykpKSE/P5/HHnsMgOrqanRdp7a2lueee67d9Sgrq8Xl6vgEbXh4ICUlre936Cpd8wVJour8WaYkjWNoRAA4nc2e23DwS/DxpzYgjroerk/rApDDh1F56F9Ep8/E7G9kZFwoj3xtBNWVzecRnPkHafjk98hBURjnf58Kuw9cVXc9ZCRSYBilOzbg9/XRjIgN5qMdZ5g+JhK5E8e07Dhio8mhMcnqovHzYxhvvpfS0tprltVjJ+I8vZ8RvuVIEmz+6gx3z+j+zZne+DfU34k2al1vtI8sS61+8PbKEJbFYiExMZGMjAwAMjIySExMbHGS77x581i/fj0ul4vy8nK2bt1Kamoq0dHR7Nq1i23btrFt2zYefPBBFi9e3KHg0ddJigEpwIKruhgfo0JMRPO/NF1z4jy7H3XIBCS59zuOhpuScZXlI9cU8sJ3kvnJw1MwqM17Ho4ze2jY8jJ
ySAx+C1Yg+wVf816SLGMcPQet8CRaSR4zJwyipLKR7NyOnbV1SeZR9/LnQTVZgOSe+L8OdcgEkFV8Cg6QOCSE3dnFIkmaILST11ZhrVq1inXr1pGamsq6detYvXo1AMuWLePIkSMApKenExMTw9y5c1m8eDHLly8nNrZ783z3Ze6VWNceh9cuZENTA4Zhk71cq2tTh00BScZ5KhOjQWkxPOQ4uYPGT9cghw/FL+2Hnv0X12MYeRsYTDRlbWHSiHAC/Qx81onJ9IoaO8fyKkgeFYnj1Fco0SORAyzXLS/5+KPEjMF5Zg9TR4ZTXNlAXhu76wVBcPPaR9n4+Phr7ut47bXXPF8riuIJLK158sknu7VufYUcFIHjzO5rvufM3QMGX5RBo71cq2uT/cwog0bhOL0T4833NHuvKfsz7NvXokSPxDf1P5AMPte5y2WS0Q/D8Ok4jn2Gz9TF3JYUzT93naW8uvGax59cz67sInRgmrUe/UQxholfb/MaQ/wUGvMPMsFSy1pZYvexIob20DyIIAwkYid6HyKbI8Feh97YfLxedzlx5O1HHTIeSek7yY8MCcnoNaVoRTme7zUd/hj79rdQYsfiO+977QoelxjHpIDLhSN7G7ePjwYdvjhU0KE6ZR4tZKg1iKCifaAaUeMmtXmNOmQCKCrquX2MHWZh97Hidh0S2V66rlPf6Oi2+wlCXyECSB9y5am8V9IKjoO9rtc2D16PGjcRFCPOi/nS7fv/gX3nO6hDJ+M79ykk1djGHZqTzVEog8fhyP6MsACFMcMs/OtQAU6tfWmTzxfXcq64lmmjLDhO73bv1jf6tnmdZPRFjRnrPuJ9ZBgVNXZyumEzI7iDxx83ZvPIc1uwlfV86l5B8CYRQPoQyRwBtAwgzjN7wWBCjRnTG9W6Lsnoixo3Aefp3ZRte5umvR+g3nQrptmPt9i0117GsXPRG2tw5uxk1oRBVNU2tXpg45UyjxaiyBKTA2zQVI9heMu9H9ejxk9Fr68kyVyFUZW7bU/I5t357Mouwu5wsebvWdgdWrfcVxD6AhFA+hA5MByQcFVf/uWluzSceftQB4/r8Cd6bzAkJKPba6nK/DuGxJmYZj7apU2OSnQicmgMTVlbGDsslNAgn3btTHe5dHZmFzFmaCjq2V1I/iEo0aPa/Vx18DhQDMj5+0hKCGPv8WI0V/t6PtdzNLec9z4/zeSREfz0kakUlNSx7uMTYpWXMGCIANKHSKoRyT+k2UosrfAkemMN6tC+sfrqakrsGOTIBMzJC/GZ/mCX85NIkoRxzFxc5efRC49z+7hojuZVUFTe+jlVJ/IrqKixM314ANq5IxgSkjt02KRk9HWn7c3dy9SR7lwnx/MrO/1zlFQ28OqGLKLD/Hl49mBGBVayYFocO7IK+fKwre0bCEI/IAJIH+POj355CMt5Zo97MnhwUi/W6vokWcU//Rksd3yz23Kzqwm3IJkCcWR9wm3jolFkiX8dbH0y/aujhZiMCon6KdBdqNc4uqTN58ZPQa+vZLR/OSajwu7szg1j2R0av//gCLoO/37XaPTPX6HgTz9mfmwVo+JC+PMnJ8kvEkuFhf5PBJA+Rg6KRL/YA9FdLpy5+1Bjk5DU9q9m6u8k1Yhh1CycZw8SpFUy4aYwth+x4XBee/7A7tDYd6KEySMi0M9kIofFoYQO6vBz3cNYRji7lwk3hXsSenWEruu89c/jnC+u5bGvjyakIBPNdgLF34z9sz/y2O1h+JtU1vw9q1uTZwlCbxABpI+RzRHojTXoTfVoRafQG6r63OorbzCMuuNirpCtzJowiNoGB3uuk7v84KlSGps0bhui4yo926HJ8ytJBhPq4IvDWIlh1NudZJ3p2G74T/aeZ2d2EQtnDGNMuIZ993qU2CSiH/oVkqygbP8Dj9+ZQGllI2/+85iYDxH6NRFA+hjpiqW8zty9oBhQY/vm8FVPkv2CUeOn4jjxJSOsPkSG+vH5gWsPY2UeLSQk0IfY2iMgKajxUzv9XDV+KnpDNSN8SvA3qR1ajXXsbAV/25bDxOHhzL8llsZ/vQ6ygum2hzAER2Ca/TiuShsxOe9xz+1D2XeihK17z7d9Y0Hoo0QA6WPkS0t5Kwtx5u5FjR3brr0MA5FxzFxwNOI8sZ1Z46PJuVDFueLmmyyr65rIOlPOLYnutLVK7Fhk387vIlcHJ4FqRM/by+SRERw4VdqupbdlVY384e9ZRIb68q07E9Gyt6HZTmBKXooc4D7zTY0Zjc+UxThz93KHTxbjE8L422c5nL7QPXtOBMHbRADpY+QgdwBx5GSi11X02dVX3qCEx6FEDafp6Cckj47EoMotlvTuPlaES9e5LaIKvb6y08NXl0iqD+rg8Thz9zJlRBh2h8bh09fO+35Jk0Pj9x8eQXO5+Pe7x+JjL/cMXanDpzcra0iahzpsCk173+eRiS5CAn34w4YsahvETnWh/xEBpI+RVB8k/xC0/EMgq6hDxvd2lXqVYcwc9JpSTMVZTBkZwVdHC2mwX558zjxaSGxEAObi/WD0c0+Ed5EaPwW9sYYEtRCzv7HV1Vi6rrP24xOcLaxhWdpookJ93UNXknvo6uqVaZIkYbr9W8ghg9C/fI1/nxtFdV0Tr23M7tbjUwTBG0QA6YMu9UKUmNFIRr9erk3vUuMmIgVYcGRtYeaEQdibNHZd/IVuK6sj11bD9JEhOHP3YYif0i2bLdXYJDCY0HL3cPPICA6dLmsWtK60bf8FvsoqJH36UMbfFIbj6KcXh67u8wxdXU0y+OA79ynQdSwH3+T+mUM4cqaMjzLPdrnuguBNIoD0QZfOxDLcgKuvribJCsYxKWi2EwwxljM4IoDPDlxA13UyjxYhSXCz/3nQmq6ZtrZTz1SNqEPG48zdx5SRYTg1FwdOlbQodyK/gr9+eorxCWEsmBaHq7r48tDViNtafYYcFIHvHd/BVXaeKdVbmJoYwd+/PMOxsxXd8jMIgjeIANIHyWFD3GdfDZnQ21XpEwwjZoDqgyNrKzMnDOJccS2nC6rZebSQUUNCMJ7bjRQUiRyZ0G3PVIdNQbfXMoQLWIJM7D7WfAlxebV70jws2JdH00Yhobc6dHXNZwxOwjj5Lpynd/LA4HyiQv34338cpbLW3m0/hyD0JBFA+iBD4iwC7vsNko9/b1elT5B8/DGMmI7z9E6mDDVhMiqs3Xyc0qpGbkvwQSs4hmH4rd22Ex5wH1xpMKGd2cOUxAiO5pZ7JrodTo1XPszC7nTx5N1j8TOp7Rq6uhbjhDTUuEm49r7Hk9OMNDY5eXXD0S6fwyUI3iACSB8kyXKbGfxuNMYxc8CloeR8QfLoKM6X1GE0yIzmFACGhFu79XnuYawJOPL2MWWEBc2ls++EO93t21tOkmur5tE7RxEd5t+hoasWz5FkTDMfRTZHErD3Tb41M5KT5yr58Ivcbv15BKEneC2A5ObmsmTJElJTU1myZAl5eXktymiaxurVq0lJSWHOnDnNMhi+//77LFiwgPT0dBYsWMDatWu9VXWhD/DkCjn2GTOT3IsMJi
aEwelMFOsI5KDwbn+mIX4K2OuIduYTGerH7mPFfH6wgO2HbaTdGsekEeHouqvDQ1dXk4y++M59Cl1zMOrsu8wcG86mnWc52M5j7AWht3gtgKxcuZKlS5fy8ccfs3TpUp599tkWZTZu3Eh+fj5btmzh3Xff5eWXX+b8efdO3dTUVP7xj3+wYcMG3nnnHd58802OHz/ureoLfYBxbCp6QzVR1VksSxvFPWNkXFWFqDd1b+/jEiVmDBh93fnSEyM4fraCv3xykqR4CwunDwXo9NDV1eRgK76zvo2rNI97/XYyONyf1zOyKa1s6K4fRxC6nVcCSFlZGdnZ2aSlpQGQlpZGdnY25eXNzxnatGkTixYtQpZlQkNDSUlJYfPmzQAEBAR4Pt01NjbicDi6dcxb6PuU6ETkEHeukFtGR+Jvcx/10lOr1STFgBo3EefFYSwdsJhNPLZgFLIsXTF0NbbDQ1fXosZNwDgxHe3UDp5MKsel6/xhQxYOp5gPEfqmzqWN6yCbzUZkZCSK4k40pCgKERER2Gw2QkNDm5WLjo72vLZarRQWFnpef/rpp/zud78jPz+f//zP/2TEiBEdqofF0vl5hfDwwE5fe6PwRhtV37qA0o/+QEDNaepzd+M/YgoRgyJ77Hn1E26n8OQObjIW8cMHJnPT4GCiLP7ougvb5j8hKSqDFj6JGtT28SntaR993gMUVZ+HQ+/xgzlP8rOMcj7Zf4EH72x/cqz+TPx/1rq+1j5eCSDdZfbs2cyePZuCggKWL1/OjBkzGDZsWLuvLyurxeXq+G7f8PBASkpE/obWeKuN9MgJSKZAija8jN5Qi2vw1B59rh4wDIx+lB74nJGzHgOXi5KSGpqyPsGen41pxiNU2I3QRh060j7ytEeQin9G2IE3SBl1Px9+nsO4YaEMChvYq/LE/2et6432kWWp1Q/eXhnCslqtFBUVoWnuQ+k0TaO4uBir1dqiXEHB5RNXbTYbUVFRLe4XHR3N2LFj+fzzz3u03kLfI6lGDIkz0RuqkHyDUGJG9+zzFBU1bhLOvAPoziaAbh+6avFMH3/3pLrDTprzY/yN8OctIhWu0Pd4JYBYLBYSExPJyMgAICMjg8TExGbDVwDz5s1j/fr1uFwuysvL2bp1K6mpqQCcPn3aU668vJxdu3YxfPhwb1Rf6GMMo+5wz30Mn96l/Ovtfl78zeBoQDt/9KpVVw/32DycEjrInV++9Az/PjSH4/mV7OxkhkRB6CleG8JatWoVK1asYM2aNQQFBfHCCy8AsGzZMp566inGjh1Leno6hw4dYu7cuQAsX76c2NhYAN5991127NiBqqrous4DDzzA9OnTr/s8YeCS/UPwX/QLJP9grzxPGTQKfPxxnNmNq7bUvepqxiNdWnXVHoZhN6MlziLy2OfcGhnFu5+eYly8BT+ToUefKwjtJek3UL9YzIH0nIHeRo1fvIEjZxego1hH4Dvv/3Wo99HZ9tGbGqh77xmcusKK8ylMnzCEB+Z2bPFIfzHQ/w111Q07ByII/Z06bAo47T0+dHU1yeiL6fZvodQV8+2403y2/wJ5hdVeebYgtEUEEEFoByU6ESVmDKYZD/f40NXV1EGjMCTOJKF6N6MCKnj74xOd6kkLQncTAUQQ2kGSFfzmf999vEkv8Jm6BMk/lAfNOzlvq+BfBy+0fZEg9DARQAShH5CMvphmPIxPQwn3R53gvX+doaquqberJdzgRAARhH5CjRmDYeTtjG/aT5SrkPWf5fR2lYQbnAgggtCP+NzyDWT/EB617GZ31gVO5IsMhkLvEQFEEPqRS0NZAU2l3B18lLe3nMSpicMWhd4hAogg9DNq7FgMI2Zwq3wIteIsn+w519tVEm5QIoAIQj/kk+weynokdBcZO3Ioq2rs7SoJNyARQAShH5KMfphue4gQrYw5xkO88+mp3q6ScAMSAUQQ+il1cBLq8Nu4w+cIJaePcUikwBW8TAQQQejHTMnfQPIz82/mnfz1k2PYHVpvV0m4gYgAIgj9mOTjj++Mh4ignElNu/ko82yX7udy6ZwpqObkuUryi2oorqinuq6JJocm8pEILfSrjISCILSkDh6POnwac05+xUt7hpA8OhKrpf3ZC126Ts75KvYcL2bviWKqaq+9w12WJExGBZOPgo9BwWRU3a+NF7/2UbCG+jFzwiBURXw2vRGIACIIA4ApeSmOc1nc59rBO1vi+N43JrV6YrCu65wuqGbPMXfQqKixoyoySfEWJo8IJ8DPgL1Jo9Hzx3ndr2vqmzyvaxscfHHIxrfuTGRIVN/K3y10PxFABGEAkHz88ZvxMNaPXyK2+At2H4tl6qjIZmV0XSfXVsOe40XsPV5MWbUdVZEYO8zCopnxjEsIw9en9V8Juq6DswndXnfxT+3F/zaCvY78KlhzWOW5t/YyP3kIC26Nw6CK3shA5bUAkpuby4oVK6isrCQ4OJgXXniBuLi4ZmU0TeP555/nyy+/RJIkHnvsMRYtWgTAK6+8wqZNaVaArAAAFV5JREFUm5BlGYPBwPe+9z1uu63781ELQn+lDhmPetOtzDm1k//dlsnYYWn4+iicLaphz7Fi9hwvprSqEUWWGDM0lLtmDGN8Qjh+JhXd2YSr2objQiGu6iL0hhp3cGisg6Z6z9e6vQ5czuvWIRJYPSiBj6RZZHyVx4GTJTxyZyJDrUHeawjBa7yWkfDf/u3fuOeee0hPT2fDhg28//77rF27tlmZv//972zcuJHXXnuNyspKFi5cyF/+8hdiYmL48ssvmTx5Mr6+vhw/fpwHHniA7du3YzKZ2l0HkZGw54g2ap232ke311H11x9TWKfwj8D7KK/TKK5sQJElRg8J5tahRkZZmjA2lOKqLMRV5f6j15YDV/y/YTAh+fgj+fgh+QRc/Nr9hyu+9vwxBSAZ/XDm7adxxzrQXZQmLOD3R4KpqnUwb+pg0qfHYVCvn8Ne/BtqXV/MSOiVAFJWVkZqaiq7du1CURQ0TWPq1Kls2bKF0NDLyXkee+wx7r77bubNmwfAz372M6Kjo3n00Ueb3U/XdSZPnsxHH31EVFRUB+ohAkhPEW3UOm+2j/PsARo+/m/22ONRg0IY6ldPsKsSaotBu6L3YPBFDo5CNl/6E4kcbEU2RyIZ2v/B7Gqu2jIaP/8/tIJjMGgsGa4ZfHK0BqvFj0fmJxI/yHzN68S/odb1xQDilSEsm81GZGTk/2/v3qOirPc9jr/nwjAwCiMIOshFoQRKVBSznVqmhBYUkLnxeLqcfdq29zoVXk4WnVVqmrtor7Xd6rFTp06rLNM0i0VAYZins01MkLzlJSOMYlBAREJuc3nOHxSByAAjMBjf11r+McPzzPObr8+az/x+8zy/HxpNy7cPjUaDv78/5eXl7QKkvLycgICA1scmk4mzZ892eL2MjAyCg4N7FB5CDBbakGi0193ClG/3gUWDWvFHbRyJKmRCu8BQeXj1ydK86iG+eMQvx/L1bpq+3E6C9jum3jaPl7+y8Zd3DhI3JYjkGaHo3DrvjYhrwzX3I/qBAwdYv349b
7zxRo/3dZSkXfHzkytKuiI1cqw/66PctxjrT/ej9RqOSu2iD2r/e2mOuonKzA34Hd3MC+Om8ZH1FrIO/MCxkmpSU6K5YYxvu12uhXPoy2Pl7Dtazg1jfIgKG45puKFPgvhKBlp9+iVATCYT586dw2aztQ5hVVRUYDKZOmxnNpsZP3480LFH8tVXX7F8+XJefvllQkNDe9wOGcLqO1Ijx1xTH084X9/Px7ycN27xT6N8lU1jUSZxnseZEPt7Xi2wkfafe5kdE8i8W8Nw12muiXPoh4o6Xnq7ELui8FlhyyzIxiE6IoKHER5sJCJ4GP7DPPokUAbtEJavry+RkZFkZWWRmJhIVlYWkZGR7YavAObOncuOHTuIi4ujpqaGvLw8tmzZAsCRI0dYunQpGzZs4MYbb+yPZgsheoFKrcV9ciLa4PE07nkN/6JXWXnj7WTWT+LTwh858u15/nBXxID7dn25+kYLmz44SpD+Eo/N0NNoDONEtZZTP1zkxPcX2H/8HNB/gTIQ9NtVWMXFxaSlpVFbW4uXlxfp6emEhoayaNEiUlNTiYqKwmazsXr1ar744gsAFi1aREpKCgDz5s2jrKyMESN+vbb9pZdeIjw8vNttkB5I35EaOSb1aaFYm2kq2InlaC4qrxFURC7g1fwGKmsaCQv0JthvCKEBXoSN8mbEAPrgtSsKG98/QumZH3l2xC40jRcBUBl80AREogmI5IJhDCcq4WTpBU6V1rSuWd82UMYGGdHrtDRbbVgsdiw2O80WGxabHYvFTrPVjsVqx2K1YbG2PG622rFa7YwJMjI5zBe1uv9qMiCuwhooJED6jtTIMalPe1bzCRr/93WUS9Woo+5ijzWa4nP1nPr+Ao3NLRNCGvRaxgR4ERbgTWiAF2NMXgzxcHNJezO/KCH7H6dZFfg5Q5qr8Ij9N+x157GZT2Azn0RpbPm/VXmPQBsQidoUyXnPEE6ds3UIFMcUvFQN+GtqGaG52OZfLXV2d/Z5x/P75Bl4eer69g3/TAKkDQmQviM1ckzq05HSXE/jvnexfrMXtW8wvtOTqfMM4Wyjju/MtXxnvkixuRZz5aXWO1RG+ni29FACvAgN8CbQ34BG3bd3uh/97jzrtx9iWcB+ghq/xWNOKtqQ6F/fh2LHXl2Grew4VvNxbOWnwNKywJfaJ6i1h1KlD+Lbc83YFAV3tR1PSw2eTVXoGytxb6jA7VIFmroKVNY2i4Np3X++tHokjaXHsDQ3sdM2m9l3z2VskLFP3zdIgLQjAdJ3pEaOSX06ZzlTRNPezSj1NQCovEeiDYho+eA1RdCkMXCmvJZic21rsNTWWwDQadWMC/XlX+6M6JPeSWVNA6vfLOAewyFuVopwv/mf0I2f43AfxW7DXlmC1XyipYdy9jTYLKBSo/YNQrE0otRWgvLrWvYqg09LUBhH/hwYJtRGEyrDsNZhPKOugZLNz6O+aCazYTJ+N9/NnKnBfTrMJwHShgRI35EaOSb1cUyx2/G2n6fqeCFW88n23+KHBaAxRaIJiEAbEAnuBs5fbKTYXMu3ZRf5/FAZfkYPlsyfgJ/Ro9fa1Gyx8Ze3DxJ86Sjz3f+BW+RM3Kc/1OMPbMXajK2iGFvZcWznvkWlH/JzWPwcFN4jUOm6bref31AqzJXU7X4VSr/iy6YwTpni+UNCFJ76vhnakwBpQwKk70iNHJP6dK1tjRS7DXvV9+2/xVubgF+GhVp6KFpTOKcrLGzceQSNWkXqfRMIDbj6ebcUReGN7BOcPXmIx713ow2IwOPOpajUrrt17pf6KIqdpoMZWIoyKbH68aFqDg8l38Tokb0/35gESBsSIH1HauSY1Kdrjmqk2K3YK34eFio/+euwECo0ARHURqWw7mMzF+uaeeSeG5k01u+q2rLnqzI+/rSANN9c3Ica8Ux8pmUeMBe6vD6W4gM07HmNWpuO1+tmcdvtU5kZPapXh7QkQNqQAOk7UiPHpD5d60mNFJsFW8V32MqO03zsU1DsKFP+mQ1FHpSYa1kw+3rumBLkVDuKyy6y/t18lg/LxehmwZC8ArWXv1Ov1ZuuVB9b5Rnqc9djqf+Jt3+ahsf1N/Hg3HD0ut7pKXUVIDJRvxDimqPSuKE1heMek4zhvjVofINR7fsflo4qYsp13mzdfZp3P/2mx18Yay8180rGEf7o9X8YqcUj7vEBER6d0fiNxnDvStz9Q/jXoZ9jLNnFmjcLKKus65/jr1q1alW/HGkAaGhoxpn+lsHgTn19d67hHrykRo5JfbrmbI1UOk+0198CqLCe2M0EtxIMwRFkHarhh4o6Jl4/vFtL7Nrsdja+f5jpDbuJ0n6P/rY/4jY6usv9+ktn9VG56XG77nfY66oJqy3AaKvmvwrVGL08CPK/urv7VSoVng7uOZEeiBDimqdSa3CPScYjIQ1sVmaY32JZVAWHv63kpXeLunUT387Pv8NUuY+bdafRRd+N29hp/dDy3qHS6tDP/CPuU1O4UXOGpd65vJ9TyJsfn8RitfXZcSVAhBC/GVpTOIZ5q9EGTySk7BOeD/uSi+erWLu5EHPVpU73KzxZwY9Fe0n0PIh2TAy6mOR+bHXvUKlU6CbciefcJfhp6vgP308o/fowazcf5Kc+6v1KgAghflNU+iHo73gM9+kPYbhYwrM+OQTbSvnL2wc5VXqhw/bmqkt8/Mle/mXoXtTDx6C/fREq1bX70agNnoBn0jPoDQaWeO8i3HqCipqGPjnWtVslIYTohEqlQnfD7Xgmr0Jr8OYh3SckGQ6ybttB8r/+dZG6hiYrb36wnz945KH1HIrn3MWotO4ubHnv0AwbhSFpBVrTWBLYQ4iq48J8veGaW1BKCCG6S+MzCs/kFTTt38bU458xxqec/86uo+riROJ/F8Lm7CMkWXMY6m7DcOdS1J59P79Uf1Hph+Bx179j/a4AzfCQPjmGBIgQ4jdNpdWhn/4gmsAb8f/8DZ4als17+ytZc3ICs+uzCNRV4xm7GI1vsKub2utUai1u1/2uz15fAkQIMSi4jZ6MZvgYGva8yv3KF5gbvyZAV4Pu5gVoQya6unnXJPkNRAgxaKiH+OAZ/xS6mHsxudWiiZiJLsrx7Lqic9IDEUIMKiq1GvdJ96C7YRa4GwbMqofXon7rgZSUlJCSksKcOXNISUnhzJkzHbax2Ww899xzxMbGcscdd7Bjx47Wv+3du5d7772XcePGkZ6e3l/NFkL8Rqn0QyQ8rlK/BcjKlStZuHAhubm5LFy4kBUrVnTY5qOPPqK0tJRdu3bx3nvvsXHjRn788UcAgoKCWLt2LQ8//HB/NVkIIYQD/RIg58+f5/jx4yQkJACQkJDA8ePHqa6ubrddTk4O8+fPR61W4+PjQ2xsLJ988gkAISEhREZGotXKqJsQQgwE/fJpXF5ezogRI9BoNABoNBr8/f0pLy/Hx8en3XYBAQGtj00mE2fP9t4NMI6mJe6Kn9/VTUo2GEiNHJP6dE1q5NhAq8+g+jov64H0HamRY1KfrkmNHHNFfQbEeiAmk4lz585hs7XMCmmz2aioqMBk
MnXYzmw2tz4uLy9n5MiR/dFEIYQQPdQvAeLr60tkZCRZWVkAZGVlERkZ2W74CmDu3Lns2LEDu91OdXU1eXl5zJkj12gLIcRA1G9DWKtWrSItLY2XX34ZLy+v1ktxFy1aRGpqKlFRUSQmJnL48GHi4uIAePTRRwkKalmWsrCwkGXLllFXV4eiKGRnZ7N27VpmzJjR7Tao1c5fsnc1+w4WUiPHpD5dkxo51t/16ep4g2pNdCGEEL1HpjIRQgjhFAkQIYQQTpEAEUII4RQJECGEEE6RABFCCOEUCRAhhBBOkQARQgjhFAkQIYQQTpEAEUII4ZRBNRuvM0pKSkhLS6Ompgaj0Uh6ejqjR492dbMGjFmzZqHT6XB3dwfgiSee6NH0Mr816enp5ObmUlZWxkcffcTYsWMBOY/a6qxGci61uHDhAk8++SSlpaXodDpCQkJYvXo1Pj4+HDp0iBUrVtDU1MSoUaP461//iq+vr+saqwiHHnjgASUjI0NRFEXJyMhQHnjgARe3aGC5/fbblVOnTrm6GQNGQUGBYjabO9RFzqNfdVYjOZdaXLhwQdm/f3/r4xdffFF5+umnFZvNpsTGxioFBQWKoijKpk2blLS0NFc1U1EURZEhLAe6u5KiEL+IiYnpsEyBnEftXalG4ldGo5GpU6e2Pp44cSJms5ljx47h7u5OTEwMAAsWLGhdsdVVZAjLge6upDjYPfHEEyiKwuTJk1m2bBleXl6ubtKAIudR98m51J7dbmfr1q3MmjWrw4qtPj4+2O321mFRV5AeiLgqW7ZsITMzk507d6IoCqtXr3Z1k8Q1Ss6ljtasWYOnpyf333+/q5tyRRIgDnR3JcXB7Jda6HQ6Fi5cSFFRkYtbNPDIedQ9ci61l56ezvfff8/f//531Gp1hxVbq6urUavVLut9gASIQ91dSXGwqq+v56efWtZoVhSFnJwcIiMjXdyqgUfOo67JudTe3/72N44dO8amTZvQ6XQAjBs3jsbGRgoLCwHYtm0bc+fOdWUzZUGprhQXF5OWlkZtbW3rSoqhoaGubtaA8MMPP/D4449js9mw2+2EhYXxzDPP4O/v7+qmuczzzz/Prl27qKqqYtiwYRiNRrKzs+U8auNKNXrllVfkXPrZ6dOnSUhIYPTo0ej1egACAwPZtGkTRUVFrFy5st1lvMOHD3dZWyVAhBBCOEWGsIQQQjhFAkQIIYRTJECEEEI4RQJECCGEUyRAhBBCOEUCRAgXio+P58svv3R1M4RwigSIEG08/PDDrF+/vsPzeXl5TJs2DavV2qvHy87ObjdxXm/54IMPCA8P57XXXmv3/K233iqBJXqNBIgQbSQnJ5OZmcnlt0dlZmZy9913o9V2f/7R3g6bnjIajbz++uvU1dW5tB3it0sCRIg2YmNjqampaZ0uAuDixYvs2bOHpKQkjhw5QkpKCjExMUyfPp3Vq1fT3Nzcum14eDhbtmwhLi6OuLg4nnvuOV588cV2x/jzn//Mm2++CbQsorRv3z4ANm7cyOLFi3nyySeJjo4mPj6eo0ePtu739ddfk5SURHR0NKmpqSxZsoR169Z1+l5CQ0OJjo5uPZYQvU0CRIg29Ho9d955JxkZGa3Pffzxx4SGhhIREYFarebpp59m//79bNu2jfz8fN599912r5GXl8f27dvJyckhOTmZrKws7HY70DIBXn5+fuvaIJf77LPPiI+Pp7CwkFmzZrFmzRoAmpubeeyxx0hOTubAgQMkJCSQl5fX5ftZvHgxb731FjU1Nc6WRIhOSYAIcZmkpCRyc3NpamoCICMjg+TkZKBlQruJEyei1WoJDAwkJSWFgoKCdvs/8sgjGI1G9Ho948ePZ+jQoeTn5wOQk5PDTTfd1On8RZMnT+a2225Do9GQmJjIyZMnATh8+DBWq5UHH3wQNzc34uLiiIqK6vK9REZGcsstt3T4LUSI3iABIsRlYmJiGDZsGHl5eZSWlnL06NHWHkNJSQl/+tOfmDZtGpMmTWLdunVcuHCh3f6XT9P+y+8q0PJbSmJiYqfHbhsser2epqYmrFYrFRUVjBgxApVK1elxOpOamsrWrVupqqrq1vZCdJcEiBBXkJiYSEZGBpmZmUyfPr31g33VqlWEhoaSm5tLUVERS5cu7fCDe9sPeYB77rmH3bt3c/LkSYqLi4mNje1xe/z8/Dh37ly7Y5WXl3dr37CwMOLi4njllVd6fFwhHJEAEeIKkpKSyM/PZ/v27SQlJbU+f+nSJQwGAwaDgeLiYrZu3drla40cOZKoqCiWL19OXFxc6xTdPTFx4kQ0Gg3vvPMOVquVvLy8dj+wd+XRRx9l586drWtuCNEbJECEuILAwECio6NpaGhg9uzZrc8/9dRTZGVlMWnSJJ599lnuuuuubr1eUlIS33zzjcPhK0d0Oh0bN27k/fffZ8qUKWRmZjJz5szWxYa6EhQURGJiIvX19U4dX4grkfVAhOgHBQUFLF++nD179nQY4nLW/PnzWbBgAfPmzeuV1xOip6QHIkQfs1gsbN68mfvuu++qwuPAgQNUVlZitVr58MMPOXXqFDNmzOjFlgrRM92/rVYI0WPFxcXMmzePiIgIXnjhhat6rZKSEpYsWUJDQwOBgYFs2LBhUC75KgYOGcISQgjhFBnCEkII4RQJECGEEE6RABFCCOEUCRAhhBBOkQARQgjhFAkQIYQQTvl/hnMDTqA+XYUAAAAASUVORK5CYII=) **Proposed Features of Geomstats**:1. 4 Dimensional matrix logarithm `log` , matrix exponential `exp` functions2. `log-normal` distribution sampler on Space of SPD matrices3. `Shrinkage Estimator` for Mean of SPD matrices under Log-Euclidean metric **References**(1) [W. James and C. M. Stein, “Estimation with Quadratic Loss,” in Proc.Fourth Berkeley Symp. Math. Stat. Probab., pp. 361–380, 1961. ](http://www.stat.yale.edu/~hz68/619/Stein-1961.pdf) (2) [Efron, B. & Morris, C. (1973b), ‘Stein’s estimation rule and its competitors: An empiricalBayes approach’, Journal of the American Statistical Association 68(341), 117–130](https://www.jstor.org/stable/2284155?seq=1) (3) [Schwartzman, A. 
(2016), ‘Lognormal distributions and geometric averages of symmetricpositive definite matrices’](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5222531/:~:text=Analogously%2C%20a%20random%20symmetric%20positive,%2Dmatrix%2Dvariate%20lognormal%20distribution.)(4) [Yang, Chun-Hao, Hani Doss, and B.C. Vemuri.(2020),An Empirical Bayes Approach to ShrinkageEstimation on the Manifold of SymmetricPositive-Definite Matrices ](https://arxiv.org/pdf/2007.02153.pdf) ###Code ###Output _____no_output_____
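###Markdown
As a pointer for proposed feature 1, the sketch below shows one way a batched ("4-dimensional", e.g. shape `(N, C, n, n)`) matrix logarithm and exponential for SPD/symmetric matrices could be written with NumPy's batched eigendecomposition. This is only an illustrative sketch of the standard eigenvalue-based construction, not the actual geomstats API.
###Code
import numpy as np

def batch_spd_logm(S):
    # S: (..., n, n) stack of SPD matrices; log(S) = V diag(log w) V^T where S = V diag(w) V^T
    w, v = np.linalg.eigh(S)
    return np.einsum('...ij,...j,...kj->...ik', v, np.log(w), v)

def batch_sym_expm(X):
    # X: (..., n, n) stack of symmetric matrices; exp(X) = V diag(exp w) V^T
    w, v = np.linalg.eigh(X)
    return np.einsum('...ij,...j,...kj->...ik', v, np.exp(w), v)

# round trip on a random 4-dimensional batch of SPD matrices
A = np.random.randn(2, 3, 4, 4)
S = np.einsum('...ij,...kj->...ik', A, A) + 4 * np.eye(4)  # A A^T + 4I is SPD
print(np.allclose(batch_sym_expm(batch_spd_logm(S)), S))
###Output
_____no_output_____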
Exploring_mesto_ua_dataset.ipynb
###Markdown Saving labeler encoders ###Code label_encoders = { 'street': LabelEncoder().fit(df['Адрес'].fillna('-1').str.title()), 'planning': LabelEncoder().fit(df['Планировка'].fillna('-1')), 'building_type': LabelEncoder().fit(df['Тип здания'].fillna('-1')), 'wall_type': LabelEncoder().fit(df['Тип стен'].fillna('-1')), 'window_type': LabelEncoder().fit(df['Тип окон'].fillna('-1')), 'heating_type': LabelEncoder().fit(df['Тип отопления'].fillna('-1')), } ###Output _____no_output_____ ###Markdown Encoding columns with categoric values and nan to -1 ###Code df['Адрес'] = label_encoders['street'].transform(df['Адрес'].fillna('-1').str.title()) df['Планировка'] = label_encoders['planning'].transform(df['Планировка'].fillna('-1')) df['Тип здания'] = label_encoders['building_type'].transform(df['Тип здания'].fillna('-1')) df['Тип стен'] = label_encoders['wall_type'].transform(df['Тип стен'].fillna('-1')) df['Тип окон'] = label_encoders['window_type'].transform(df['Тип окон'].fillna('-1')) df['Тип отопления'] = label_encoders['heating_type'].transform(df['Тип отопления'].fillna('-1')) df['Цена'] = df['Цена'].fillna('-1') ###Output _____no_output_____ ###Markdown Getting only usd and hrn prices ###Code df = df[(df['Цена'].str.contains('грн'))|(df['Цена'].str.contains('\$'))].copy() ###Output _____no_output_____ ###Markdown Getting the currency and removing it from the price ###Code df['Валюта'] = df['Цена'].apply(get_currency) label_encoders['currency'] = LabelEncoder().fit(df['Валюта'].fillna('-1')) df['Цена'] = df['Цена'].apply(lambda x: int(str(x).split()[0])) df['Валюта'] = label_encoders['currency'].transform(df['Валюта']) ###Output _____no_output_____ ###Markdown Filling the numeric columns with -1 and changing the type to int ###Code df['Этаж'] = df['Этаж'].fillna('-1') df['Всего этажей'] = df['Всего этажей'].fillna('-1') df['Этаж'] = df['Этаж'].astype('int32') # changing the type from str to int df['Всего этажей'] = df['Всего этажей'].astype('int32') ###Output _____no_output_____ ###Markdown Filling area columns with -1 and changing the type to int ###Code df['Площадь'] = df['Площадь'].fillna('-1') df['Площадь жилая'] = df['Площадь жилая'].fillna('-1') df['Площадь кухни'] = df['Площадь кухни'].fillna('-1') df['Площадь'] = df['Площадь'].apply(change_type) df['Площадь кухни'] = df['Площадь кухни'].apply(change_type) df['Площадь жилая'] = df['Площадь жилая'].apply(change_type) df df[(df.isna().any(axis=1))] # checking if there are nan values left label_encoders['currency'].classes_ # making sure that currency labels are only 2 df.info() # checking dtypes df = df.reset_index().drop('index', axis=1) almost_garbage = [] uncertain = [] for i in range(0, len(df)): incorrects = [x for x in df.iloc[i] if x == -1] if len(incorrects) > 3: almost_garbage.append(i) elif len(incorrects) < 2 and len(incorrects) != 0: uncertain.append(i) label_encoders['currency'].classes_ df[df['Тип стен']==0] def extract_district(x): import re x = x[7:] values = [el for el in x.split('/')] for el in values: if re.match('\w+\-rajon', el): return re.findall('\w+\-rajon', el)[0] return '-1' extract_district('https://kiev.mesto.ua/sale/desnyanskij-rajon/mikrorajon-miloslavichi/ulitsa-nikolaya-zakrevskogo/15154679.html?tar') # (lambda x: x[7:].split('/')[3]) df['Район'] = df['URL'].apply(extract_district) df label_encoders['district'] = LabelEncoder().fit(df['Район']) label_encoders['district'].classes_ df['Район'] = label_encoders['district'].transform(df['Район']) df df.columns = ['street', 'rooms', 'total_area', 
'living_area', 'kitchen_area', 'floor', 'floor_count', 'planning', 'building_type', 'wall_type', 'window_type', 'heating_type', 'price', 'description', 'URL', 'currency', 'district'] df hryvnya = df[df['currency']==1].copy() # df.drop(df[df['currency']==1].index) hryvnya hryvnya['price'] = hryvnya['price'] / 24.6 hryvnya['currency'] = 0 hryvnya['price'] = hryvnya['price'].astype('int64') hryvnya df = df.drop(df[df['currency']==1].index) df = df.append(hryvnya).reset_index() df df = df.drop('index', axis=1) df df = df.rename(columns={'price': 'price_usd'}) df = df.drop('currency', axis=1) df clustering_df = df.copy() clustering_df.head() clustering_df['desc_len'] = clustering_df['description'].apply(lambda x: len([el for el in x.split() if len(el)>3])) clustering_df['ad_completion_in_%'] = clustering_df.apply(lambda x: int(abs((list(x.values).count(-1)/15)*100 - 100)), axis=1) # feature engineering the ad completion and description attributes clustering_df def check_unique(): from nltk.tokenize import word_tokenize import string from collections import Counter all_words = [] for val in clustering_df['description']: all_words.extend([str(x).lower() for x in word_tokenize(val) if x not in string.punctuation and len(x)>2]) return pd.DataFrame(Counter(all_words).most_common(), index=range(0, len(Counter(all_words).most_common()))).rename(columns={0: 'word', 1:'quantity'}) #words = check_unique() #words.head() from sklearn.cluster import KMeans from sklearn.model_selection import train_test_split kmeans = KMeans(n_clusters=3, init='random', n_init=20, max_iter=400, random_state=0).fit(clustering_df.drop(['URL', 'description'], axis=1)) predictions = kmeans.predict(clustering_df.drop(['URL', 'description'], axis=1)) predictions clustering_df['targets_params'] = predictions clustering_df for key in label_encoders.keys(): for column in clustering_df.columns: if key == column: clustering_df[column] = label_encoders[key].inverse_transform(clustering_df[column]) clustering_df.head() class_0_params = clustering_df[clustering_df['targets_params']==0].copy() class_1_params = clustering_df[clustering_df['targets_params']==1].copy() class_2_params = clustering_df[clustering_df['targets_params']==2].copy() class_0_params.info(), class_1_params.info(), class_2_params.info() from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline from sklearn.cluster import MiniBatchKMeans tfidf_clf = Pipeline([('tfidf', TfidfVectorizer()), ('kmeans', MiniBatchKMeans(n_clusters=3, init='random', random_state=0))]) tfidf_clf.fit(clustering_df['description']) predictions_tfidf = tfidf_clf.predict(clustering_df['description']) predictions_tfidf clustering_df['targets_text'] = predictions_tfidf clustering_df.head() class_0_text = clustering_df[clustering_df['targets_text']==0].copy() class_1_text = clustering_df[clustering_df['targets_text']==1].copy() class_2_text = clustering_df[clustering_df['targets_text']==2].copy() class_0_text.info(), class_1_text.info(), class_2_text.info() matrix = TfidfVectorizer().fit_transform(class_0_text['description']) vectorizer = TfidfVectorizer().fit(class_0_text['description']) vectorizer.get_feature_names() def remove_nums(text): import string new_text = [] no_digit = True for el in text.split(): for c in list(el): if c in string.digits: no_digit = False break if no_digit and len(el)>2: new_text.append(el) no_digit = True return ' '.join(new_text) remove_nums('Царський будинок. Без ліфта. Неповторна квартира в центрі міста! Ремонту 6 років. 
Просторі та світлі кімнати. Висота стелі 3.7м. В...') class_0_text['description'] = class_0_text['description'].apply(remove_nums) class_1_text['description'] = class_1_text['description'].apply(remove_nums) class_2_text['description'] = class_2_text['description'].apply(remove_nums) vectorizer0 = TfidfVectorizer().fit(class_0_text['description']) values_words = dict(zip(vectorizer0.get_feature_names(), vectorizer0.idf_)) words_df_cls0 = pd.DataFrame({'word': list(values_words.keys()), 'score': list(values_words.values())}, index=range(0,len(values_words))) del values_words words_df_cls0.sort_values('score', ascending=False).hist() vectorizer1 = TfidfVectorizer().fit(class_1_text['description']) values_words = dict(zip(vectorizer1.get_feature_names(), vectorizer1.idf_)) words_df_cls1 = pd.DataFrame({'word': list(values_words.keys()), 'score': list(values_words.values())}, index=range(0,len(values_words))) del values_words words_df_cls1.sort_values('score', ascending=False).hist() vectorizer2 = TfidfVectorizer().fit(class_2_text['description']) values_words = dict(zip(vectorizer2.get_feature_names(), vectorizer2.idf_)) words_df_cls2 = pd.DataFrame({'word': list(values_words.keys()), 'score': list(values_words.values())}, index=range(0,len(values_words))) del values_words words_df_cls2.sort_values('score', ascending=False).hist() ###Output _____no_output_____
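Note: the cells earlier in this notebook apply two helper functions, `get_currency` and `change_type`, whose definitions are not shown in the text above. The following is only a minimal sketch of what they likely do, inferred from how they are used (reading the currency marker out of price strings such as "45000 $" or "1200000 грн", and coercing area strings to integers); the bodies in the original notebook may differ.

```python
# Hypothetical reconstructions of helpers used above; treat the bodies as assumptions.

def get_currency(price):
    # Prices were filtered to rows containing 'грн' or '$', so return whichever
    # currency marker the string carries; '-1' is the same missing-value
    # placeholder used elsewhere in this notebook.
    price = str(price)
    if 'грн' in price:
        return 'грн'
    if '$' in price:
        return '$'
    return '-1'

def change_type(area):
    # Area columns arrive as strings like '45.6'; truncate to int and keep -1
    # for the missing-value placeholder.
    try:
        return int(float(area))
    except ValueError:
        return -1
```

With definitions of this shape, the `.apply(get_currency)` and `.apply(change_type)` calls above run end to end.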
10.Algorithms_Data_Structure/Searching_n_Sorting/04 Order Statistics.ipynb
###Markdown K’th Smallest/Largest Element in Unsorted Array | Set 1 Given an array and a number k where k is smaller than size of array, we need to find the k’th smallest element in the given array. It is given that ll array elements are distinct.**Example**:```bashInput: arr[] = {7, 10, 4, 3, 20, 15}k = 3Output: 7Input: arr[] = {7, 10, 4, 3, 20, 15}k = 4Output: 10``` k-largest(or k-smallest) elements in an array | added Min Heap method**Question**: Write an efficient program for printing k largest elements in an array. Elements in array can be in any order.For example, if given array is `[1, 23, 12, 9, 30, 2, 50]` and you are asked for the largest 3 elements i.e., `k = 3` then your program should print 50, 30 and 23. Method 1 (Use Bubble k times)Thanks to Shailendra for suggesting this approach.1. Modify Bubble Sort to run the outer loop at most k times.2. Print the last k elements of the array obtained in step 1.Time Complexity: O(nk)Like Bubble sort, other sorting algorithms like Selection Sort can also be modified to get the k largest elements. Bubble SortBubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent elements if they are in wrong order. ###Code def bubbleSort(arr): n = len(arr) # Traverse through all array elements for i in range(n): # Last i elements are already in place for j in range(0, n-i-1): # traverse the array from 0 to n-i-1 # Swap if the element found is greater # than the next element if arr[j] > arr[j+1] : arr[j], arr[j+1] = arr[j+1], arr[j] return arr # Driver code to test above arr = [64, 34, 25, 12, 22, 11, 90] print(f"Original array is: {arr}") bubbleSort(arr) print(f"Sorted array is : {arr}") ###Output Original array is: [64, 34, 25, 12, 22, 11, 90] Sorted array is : [11, 12, 22, 25, 34, 64, 90] ###Markdown **Optimized Implementation**:The above function always runs $O(n^2)$ time even if the array is sorted. It can be optimized by stopping the algorithm if inner loop didn’t cause any swap. ###Code def bubbleSort(arr): n = len(arr) # Traverse through all array elements for i in range(n): swapped = False # Last i elements are already in place for j in range(0, n-i-1): # traverse the array from 0 to n-i-1 # Swap if the element found is greater # than the next element if arr[j] > arr[j+1] : arr[j], arr[j+1] = arr[j+1], arr[j] swapped = True if not swapped: break return arr # Driver code to test above # arr = [64, 34, 25, 12, 22, 11, 90] arr = [x for x in range(10)] print(f"Original array is: {arr}") bubbleSort(arr) print(f"Sorted array is : {arr}") ###Output Original array is: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Sorted array is : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ###Markdown - **Worst and Average Case Time Complexity**: $O(n^2)$. Worst case occurs when array is reverse sorted.- **Best Case Time Complexity**: $O(n)$. Best case occurs when array is already sorted.- **Auxiliary Space**: $O(1)$- **Boundary Cases**: Bubble sort takes minimum time (Order of n) when elements are already sorted.- **Sorting In Place**: Yes- **Stable**: YesDue to its simplicity, bubble sort is often used to introduce the concept of a sorting algorithm.In computer graphics it is popular for its capability to detect a very small error (like swap of just two elements) in almost-sorted arrays and fix it with just linear complexity (2n). 
For example, it is used in a polygon filling algorithm, where bounding lines are sorted by their x coordinate at a specific scan line (a line parallel to x axis) and with incrementing y their order changes (two elements are swapped) only at intersections of two lines Method 2 (Use temporary array)K largest elements from `arr[0..n-1]`1. Store the first k elements in a temporary array `temp[0..k-1]`.2. Find the smallest element in `temp[]`, let the smallest element be `min`.3. For each element x in `arr[k]` to `arr[n-1]`. $O(n-k)$. If x is greater than the min then remove min from `temp[]` and insert `x`.4. Then, determine the new min from `temp[]`. $O(k)$.5. Print final `k` elements of `temp[]`.Time Complexity: $O((n-k)*k)$. If we want the output sorted then $O((n-k)*k + klogk)$ Method 3 (Use Sorting)1. Sort the elements in descending order in $O(n\log n)$.2. Print the first k numbers of the sorted array $O(k)$.Following is the implementation of above.Time complexity: $O(n\log n)$ ###Code ''' Python3 code for k largest elements in an array''' def kLargest(arr, k): # Sort the given array arr in reverse # order. arr_sorted = sorted(arr, reverse = True) # TimSort # Print the first kth largest elements # for i in range(k): # print (arr[i], end =" ") return arr_sorted[k-1] # Driver code to test above arr = [64, 34, 25, 12, 22, 11, 90] k = 1 topk = kLargest(arr, k) print(f"Original array is: {arr}") print(f"The {k}'s largest element is: {topk}") ###Output Original array is: [64, 34, 25, 12, 22, 11, 90] The 1's largest element is: 90 ###Markdown Method 4 (QuickSelect)This is an optimization over method 1 if QuickSort is used as a sorting algorithm in first step. In QuickSort, we pick a pivot element, then move the pivot element to its correct position and partition the array around it. The idea is, not to do complete quicksort, but stop at the point where pivot itself is k’th smallest element. Also, not to recur for both left and right sides of pivot, but recur for one of them according to the position of pivot. The worst case time complexity of this method is $O(n^2)$, but it works in $O(n)$ on average. ###Code # This function returns k'th smallest element # in arr[l..r] using QuickSort based method. # ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCT import sys def kthSmallest(arr, l, r, k): # If k is smaller than number of # elements in array if (k > 0 and k <= r - l + 1): # Partition the array around last # element and get position of pivot # element in sorted array pos = partition(arr, l, r) # If position is same as k if (pos - l == k - 1): return arr[pos] if (pos - l > k - 1): # If position is more, # recur for left subarray return kthSmallest(arr, l, pos - 1, k) # Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1) # If k is more than number of # elements in array return sys.maxsize # Standard partition process of QuickSort(). 
# It considers the last element as pivot and # moves all smaller element to left of it # and greater elements to right def partition(arr, l, r): x = arr[r] i = l for j in range(l, r): if (arr[j] <= x): arr[i], arr[j] = arr[j], arr[i] i += 1 arr[i], arr[r] = arr[r], arr[i] return i # Driver Code if __name__ == "__main__": arr = [12, 3, 5, 7, 4, 19, 26] n = len(arr) k = 1 print(f"Original array is: {arr}") print(f"{k}'th smallest element is:{kthSmallest(arr, 0, n - 1, k)}") # This code is contributed by ita_c ###Output Original array is: [12, 3, 5, 7, 4, 19, 26] 1'th smallest element is:3 ###Markdown Randomized QuickSelectThe idea is to randomly pick a pivot element. To implement randomized partition, we use a random function, rand() to generate index between l and r, swap the element at randomly generated index with the last element, and finally call the standard partition process which uses last element as pivot. ###Code # Python3 implementation of randomized # quickSelect import random # This function returns k'th smallest # element in arr[l..r] using QuickSort # based method. ASSUMPTION: ELEMENTS # IN ARR[] ARE DISTINCT def kthSmallest(arr, l, r, k): # If k is smaller than number of # elements in array if (k > 0 and k <= r - l + 1): # Partition the array around a random # element and get position of pivot # element in sorted array pos = randomPartition(arr, l, r) # If position is same as k if (pos - l == k - 1): return arr[pos] if (pos - l > k - 1): # If position is more, # recur for left subarray return kthSmallest(arr, l, pos - 1, k) # Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1) # If k is more than the number of # elements in the array return 999999999999 # Standard partition process of QuickSort(). # It considers the last element as pivot and # moves all smaller element to left of it and # greater elements to right. This function # is used by randomPartition() def partition(arr, l, r): x = arr[r] i = l for j in range(l, r): if (arr[j] <= x): arr[i], arr[j] = arr[j], arr[i] i += 1 arr[i], arr[r] = arr[r], arr[i] return i # Picks a random pivot element between l and r # and partitions arr[l..r] around the randomly # picked element using partition() def randomPartition(arr, l, r): n = r - l + 1 pivot = int(random.random() % n) arr[l + pivot], arr[r] = arr[l + pivot], arr[r] # move to the right return partition(arr, l, r) # call standard partition function. # Driver Code if __name__ == '__main__': arr = [12, 3, 5, 7, 4, 19, 26] n = len(arr) k = 3 print(f"Original array is: {arr}") print(f"{k}'th smallest element is:{kthSmallest(arr, 0, n - 1, k)}") ###Output Original array is: [12, 3, 5, 7, 4, 19, 26] 3'th smallest element is:5
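###Markdown The section title above mentions a "Min Heap method" for the k-largest problem, but no heap-based implementation appears in this notebook. A short illustrative sketch using Python's standard `heapq` module is given below (this is an addition, not code from the original source): keep a heap of size k while scanning the array, which gives O(n log k) time instead of the O(nk) of the bubble-based Method 1 or the O(n log n) of full sorting.

```python
import heapq

def k_largest(arr, k):
    # Maintain a min-heap holding the k largest elements seen so far.
    heap = arr[:k]
    heapq.heapify(heap)
    for x in arr[k:]:
        if x > heap[0]:               # bigger than the smallest of the current top-k
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)

def kth_smallest(arr, k):
    # heapq.nsmallest also runs in O(n log k).
    return heapq.nsmallest(k, arr)[-1]

arr = [1, 23, 12, 9, 30, 2, 50]
print(k_largest(arr, 3))     # [50, 30, 23]
print(kth_smallest(arr, 3))  # 9
```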
Notebooks/Lab0v3.ipynb
###Markdown Primer Notebook ESMA 3016 Edgar Acuna Agosto 20, 2019 ###Code x=[3,4,5] print(type(x)) # Una operacion basica elemental z=(3+5*6)/12 print(z) print(type(z)) # Entrando el dato con el teclado age = input("How old are you? ") print(type(age)) print("Your age is", age) print("You have", 65-int(age), "years until retirement") name = "Edgar Acuna Fernandez" length = len(name) #imprimiendo en Mayuscula e imprimiendo la longitud big_name = str.upper(name) print(big_name, "tiene", length, "caracteres") names = ["Ana", "Rosa", "Julia"] names[0] names[-3] #uso del condicional If gpa = 3.4 if gpa > 2.0: print("Su solicitud de admision es aceptada.") # Uso de if/else gpa = 1.4 if gpa >= 2.5: print("Bienvenido al Colegio de Mayaguez!") else: print("Su solicitud de admision ha sido denegada.") #Ejemplo con operadores logicos Ana=3 Rosa=25 if (Ana <= 5 and Rosa >= 10): print("Ana and Rosa") if (Rosa == 500 or Ana != 5): print("Otra vez Ana y Rosa") range(5,10) list(range(5,10)) #Ejemplo de loop for x in range(1, 4): print(x, "squared is", x * x) # Otro ejemplo de loop names = ["Ana", "Rosa", "Julia"] for name in names: print(name) # Ejemplo de break y continue for value in [3, 1, 4, 1, 5, 9, 2]: print("Checking", value) if value > 8: print("Exiting for loop") break elif value < 3: print("Ignoring") continue print("The square is", value**2) #Ejemplo de while number = 1 while number < 200: print(number), number = number * 2 #Sumando una constante 10 a una lista vec1=[3,4,5] [x +10 for x in vec1] #summando dos vectores componente a componente vec2=[9,10,11] for a,b in zip(vec1,vec2): print(a+b) #usando el modulo matematico math import math math.pi #usando el modulo matematico math con el alias m import math as m m.pi #importando solamente la funcion pi del modulo math from math import pi pi cos(sqrt(pi)) #Leyendo un archivo de datos de la internet import pandas as pd df=pd.read_csv("http://academic.uprm.edu/eacuna/Animals2.csv") df.info() df.head() df.tail(10) #Leyendo un archivo de dato almacenado en mi PC #df=pd.read_csv("c:\esma3016\Animals2.csv") for e in __builtins__.__dict__: print(e) import math help(math.sin) ###Output _____no_output_____ ###Markdown Primer Notebook ESMA 3016 Edgar Acuna Enero 14, 2019 ###Code x=[3,4,5] # Una operacion basica elemental (3+5*6)/12 # Entrando el dato con el teclado age = input("How old are you? 
") print("Your age is", age) print("You have", 65-int(age), "years until retirement") name = "Edgar Acuna Fernandez" length = len(name) #imprimiendo en Mayuscula e imprimiendo la longitud big_name = str.upper(name) print(big_name, "tiene", length, "caracteres") names = ["Ana", "Rosa", "Julia"] names[0] names[-2] #uso del condicional If gpa = 3.4 if gpa > 2.0: print("Su solicitud de admision es aceptada.") # Uso de if/else gpa = 1.4 if gpa >= 2.5: print("Bienvenido al Colegio de Mayaguez!") else: print("Su solicitud de admision ha sido denegada.") #Ejemplo con operadores logicos Ana=3 Rosa=25 if (Ana <= 5 and Rosa >= 10): print("Ana and Rosa") if (Rosa == 500 or Ana != 5): print("Otra vez Ana y Rosa") range(5,10) list(range(5,10)) #Ejemplo de loop for x in range(1, 4): print(x, "squared is", x * x) # Otro ejemplo de loop names = ["Ana", "Rosa", "Julia"] for name in names: print(name) # Ejemplo de break y continue for value in [3, 1, 4, 1, 5, 9, 2]: print("Checking", value) if value > 8: print("Exiting for loop") break elif value < 3: print("Ignoring") continue print("The square is", value**2) #Ejemplo de while number = 1 while number < 200: print(number), number = number * 2 #Sumando una constante 10 a una lista vec1=[3,4,5] [x +10 for x in vec1] #summando dos vectores componente a componente vec2=[9,10,11] for a,b in zip(vec1,vec2): print(a+b) #usando el modulo matematico math import math math.pi #usando el modulo matematico math con el alias m import math as m m.pi #importando solamente la funcion pi del modulo math from math import pi pi #Leyendo un archivo de datos de la internet import pandas as pd df=pd.read_csv("http://academic.uprm.edu/eacuna/Animals2.csv") df.info() #Leyendo un archivo de dato almacenado en mi PC #df=pd.read_csv("c:\esma3016\Animals2.csv") File=open("c://PW-PR/Animals2.csv").read() print(File) for e in __builtins__.__dict__: print(e) ###Output __name__ __doc__ __package__ __loader__ __spec__ __build_class__ __import__ abs all any ascii bin callable chr compile delattr dir divmod eval exec format getattr globals hasattr hash hex id input isinstance issubclass iter len locals max min next oct ord pow print repr round setattr sorted sum vars None Ellipsis NotImplemented False True bool memoryview bytearray bytes classmethod complex dict enumerate filter float frozenset property int list map object range reversed set slice staticmethod str super tuple type zip __debug__ BaseException Exception TypeError StopAsyncIteration StopIteration GeneratorExit SystemExit KeyboardInterrupt ImportError ModuleNotFoundError OSError EnvironmentError IOError WindowsError EOFError RuntimeError RecursionError NotImplementedError NameError UnboundLocalError AttributeError SyntaxError IndentationError TabError LookupError IndexError KeyError ValueError UnicodeError UnicodeEncodeError UnicodeDecodeError UnicodeTranslateError AssertionError ArithmeticError FloatingPointError OverflowError ZeroDivisionError SystemError ReferenceError BufferError MemoryError Warning UserWarning DeprecationWarning PendingDeprecationWarning SyntaxWarning RuntimeWarning FutureWarning ImportWarning UnicodeWarning BytesWarning ResourceWarning ConnectionError BlockingIOError BrokenPipeError ChildProcessError ConnectionAbortedError ConnectionRefusedError ConnectionResetError FileExistsError FileNotFoundError IsADirectoryError NotADirectoryError InterruptedError PermissionError ProcessLookupError TimeoutError open copyright credits license help __IPYTHON__ display get_ipython
docs/tutorials/text_generation.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. 
Setup Import TensorFlow and other libraries ###Code import tensorflow as tf from tensorflow.keras.layers.experimental import preprocessing import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `preprocessing.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `preprocessing.StringLookup` layer: ###Code ids_from_chars = preprocessing.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts form tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `preprocessing.StringLookup(..., invert=True)`. Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `preprocessing.StringLookup` layer so that the `[UNK]` tokens is set the same way. ###Code chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. 
Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output _____no_output_____ ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output _____no_output_____ ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. 
###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. 
Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_loss = loss(target_example_batch, example_batch_predictions) mean_loss = example_batch_loss.numpy().mean() print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. ###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. 
# predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. 
This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. ###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). 
Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `tf.keras.layers.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. 
###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `tf.keras.layers.StringLookup` layer: ###Code ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts from tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `tf.keras.layers.StringLookup(..., invert=True)`. Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `tf.keras.layers.StringLookup` layer so that the `[UNK]` tokens is set the same way. ###Code chars_from_ids = tf.keras.layers.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output _____no_output_____ ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output _____no_output_____ ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. 
At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. 
For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_loss = loss(target_example_batch, example_batch_predictions) mean_loss = example_batch_loss.numpy().mean() print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. 
###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. 
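The `temperature` argument of `OneStep` divides the logits before sampling, so values below `1.0` make the output more conservative and values above `1.0` make it more random. As an optional variation (the `cool_model` name is just for illustration, and it is not used in the loop below): ###Code
# Illustrative variant: temperature below 1.0 sharpens the sampling distribution,
# giving less random text; values above 1.0 give more random text.
cool_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)
###Output
_____no_output_____ ###Markdown The loop below uses the default `one_step_model` with `temperature=1.0`.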
###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. 
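The same pattern makes it easy to slot in extra training logic. For example, gradient clipping is a common addition when training RNNs; the sketch below is illustrative only (the class name and the clip norm of `5.0` are arbitrary choices, not part of the original model): ###Code
class ClippedTraining(MyModel):
  @tf.function
  def train_step(self, inputs):
    inputs, labels = inputs
    with tf.GradientTape() as tape:
      predictions = self(inputs, training=True)
      loss = self.loss(labels, predictions)
    grads = tape.gradient(loss, self.trainable_variables)
    # Rescale the gradients so that their global norm is at most 5.0.
    grads, _ = tf.clip_by_global_norm(grads, 5.0)
    self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
    return {'loss': loss}
###Output
_____no_output_____ ###Markdown The unclipped `CustomTraining` class defined above is what is compiled and trained below.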
###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). 
The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `tf.keras.layers.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `tf.keras.layers.StringLookup` layer: ###Code ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts from tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `tf.keras.layers.StringLookup(..., invert=True)`. 
Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `tf.keras.layers.StringLookup` layer so that the `[UNK]` tokens is set the same way. ###Code chars_from_ids = tf.keras.layers.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output _____no_output_____ ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output _____no_output_____ ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. 
But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). 
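To make the note above concrete, a training-only `keras.Sequential` version of the same stack might look like the following sketch (illustrative only; it has no way to pass GRU states in and out, which is why the subclassed model is used instead): ###Code
# Training-only equivalent of MyModel; it cannot return or accept GRU states.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(ids_from_chars.get_vocabulary()), embedding_dim),
    tf.keras.layers.GRU(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(len(ids_from_chars.get_vocabulary()))
])
###Output
_____no_output_____ ###Markdown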
Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_mean_loss = loss(target_example_batch, example_batch_predictions) print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", example_batch_mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(example_batch_mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. ###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. 
In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. 
In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. 
###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). 
The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf from tensorflow.keras.layers.experimental import preprocessing import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `preprocessing.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `preprocessing.StringLookup` layer: ###Code ids_from_chars = preprocessing.StringLookup( vocabulary=list(vocab)) ###Output _____no_output_____ ###Markdown It converts form tokens to character IDs, padding with `0`: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. 
For this you can use `preprocessing.StringLookup(..., invert=True)`. Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `preprocessing.StringLookup` layer so that the padding and `[UNK]` tokens are set the same way. ###Code chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output _____no_output_____ ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output _____no_output_____ ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. 
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. 
For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_loss = loss(target_example_batch, example_batch_predictions) mean_loss = example_batch_loss.numpy().mean() print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. 
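You can also ask `compile` to track a per-character accuracy during training — an optional extra that is not required by the rest of the tutorial: ###Code
# Optional: also report how often the highest-scoring character matches the label.
model.compile(optimizer='adam',
              loss=loss,
              metrics=['sparse_categorical_accuracy'])
###Output
_____no_output_____ ###Markdown The plain configuration below works the same way, just without the extra metric.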
###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "" or "[UNK]" from being generated. skip_ids = self.ids_from_chars(['', '[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "" or "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. 
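If you find yourself generating text repeatedly, the sampling loop can be wrapped in a small convenience helper — a hypothetical function whose name and defaults are purely illustrative: ###Code
def generate_text(one_step_model, prompt, num_chars=1000):
  # Feed each sampled character (and the RNN state) back in to get the next one.
  states = None
  next_char = tf.constant([prompt])
  result = [next_char]
  for _ in range(num_chars):
    next_char, states = one_step_model.generate_one_step(next_char, states=states)
    result.append(next_char)
  return tf.strings.join(result)[0].numpy().decode('utf-8')
###Output
_____no_output_____ ###Markdown The explicit loop below does the same thing step by step, and also times the run.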
###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. 
###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). 
The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf from tensorflow.keras.layers.experimental import preprocessing import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `preprocessing.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `preprocessing.StringLookup` layer: ###Code ids_from_chars = preprocessing.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts form tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. 
For this you can use `preprocessing.StringLookup(..., invert=True)`. Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `preprocessing.StringLookup` layer so that the `[UNK]` token is set the same way. ###Code
chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)
###Output
_____no_output_____
###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code
chars = chars_from_ids(ids)
chars
###Output
_____no_output_____
###Markdown You can use `tf.strings.reduce_join` to join the characters back into strings. ###Code
tf.strings.reduce_join(chars, axis=-1).numpy()

def text_from_ids(ids):
  return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
###Output
_____no_output_____
###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids

ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)

for ids in ids_dataset.take(10):
    print(chars_from_ids(ids).numpy().decode('utf-8'))

seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
###Output
_____no_output_____
###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code
sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)

for seq in sequences.take(1):
  print(chars_from_ids(seq))
###Output
_____no_output_____
###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code
for seq in sequences.take(5):
  print(text_from_ids(seq).numpy())
###Output
_____no_output_____
###Markdown For training you'll need a dataset of `(input, label)` pairs, where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](https://github.com/tensorflow/text/blob/master/docs/tutorials/images/text_generation_training.png?raw=1) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. 
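As a point of comparison (a sketch of my own, not part of the tutorial), the training-only behavior of `MyModel` could be approximated with a plain `keras.Sequential` stack that reuses the `vocab_size`, `embedding_dim` and `rnn_units` defined above; it has no `states` or `return_state` handling, which is exactly what the subclassed model adds for generation later: ###Code
# Hedged sketch: the same three layers as MyModel in a Sequential stack.
# Suitable for training only, since it cannot accept or return the GRU state.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.GRU(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size)
])
###Output
_____no_output_____
###Markdown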
For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnn#rnn_state_reuse). Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code
for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
_____no_output_____
###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code
model.summary()
###Output
_____no_output_____
###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
###Output
_____no_output_____
###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code
sampled_indices
###Output
_____no_output_____
###Markdown Decode these to see the text predicted by this untrained model: ###Code
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
###Output
_____no_output_____
###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)

example_batch_loss = loss(target_example_batch, example_batch_predictions)
mean_loss = example_batch_loss.numpy().mean()
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss: ", mean_loss)
###Output
_____no_output_____
###Markdown A newly initialized model shouldn't be too sure of itself; the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code
tf.exp(mean_loss).numpy()
###Output
_____no_output_____
###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](https://github.com/tensorflow/text/blob/master/docs/tutorials/images/text_generation_sampling.png?raw=1)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. 
###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. 
###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). 
The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `tf.keras.layers.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `tf.keras.layers.StringLookup` layer: ###Code ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts from tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `tf.keras.layers.StringLookup(..., invert=True)`. 
Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `tf.keras.layers.StringLookup` layer so that the `[UNK]` token is set the same way. ###Code
chars_from_ids = tf.keras.layers.StringLookup(
    vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)
###Output
_____no_output_____
###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code
chars = chars_from_ids(ids)
chars
###Output
_____no_output_____
###Markdown You can use `tf.strings.reduce_join` to join the characters back into strings. ###Code
tf.strings.reduce_join(chars, axis=-1).numpy()

def text_from_ids(ids):
  return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
###Output
_____no_output_____
###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids

ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)

for ids in ids_dataset.take(10):
    print(chars_from_ids(ids).numpy().decode('utf-8'))

seq_length = 100
###Output
_____no_output_____
###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code
sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)

for seq in sequences.take(1):
  print(chars_from_ids(seq))
###Output
_____no_output_____
###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code
for seq in sequences.take(5):
  print(text_from_ids(seq).numpy())
###Output
_____no_output_____
###Markdown For training you'll need a dataset of `(input, label)` pairs, where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code
def split_input_target(sequence):
    input_text = sequence[:-1]
    target_text = sequence[1:]
    return input_text, target_text

split_input_target(list("Tensorflow"))

dataset = sequences.map(split_input_target)

for input_example, target_example in dataset.take(1):
    print("Input :", text_from_ids(input_example).numpy())
    print("Target:", text_from_ids(target_example).numpy())
###Output
_____no_output_____
###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences.
But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in StringLookup Layer vocab_size = len(ids_from_chars.get_vocabulary()) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( vocab_size=vocab_size, embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. 
This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_mean_loss = loss(target_example_batch, example_batch_predictions) print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", example_batch_mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(example_batch_mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. ###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. 
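To make that loop concrete before the tutorial's `OneStep` class below, here is a rough sketch of my own (not part of the original notebook) that drives the trained `model` directly, assuming the `ids_from_chars` and `chars_from_ids` layers defined above: ###Code
# Hedged sketch: sample one character at a time, feeding each sampled ID and
# the returned GRU state back into the next call. Unlike the OneStep class
# below, this applies no temperature and no "[UNK]" mask.
states = None
next_ids = ids_from_chars(tf.strings.unicode_split(['R'], 'UTF-8')).to_tensor()
generated_ids = []

for _ in range(20):
    logits, states = model(inputs=next_ids, states=states, return_state=True)
    next_ids = tf.random.categorical(logits[:, -1, :], num_samples=1)
    generated_ids.append(next_ids)

print(tf.strings.reduce_join(
    chars_from_ids(tf.concat(generated_ids, axis=-1)), axis=-1)[0].numpy())
###Output
_____no_output_____
###Markdown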
The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. 
###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. ###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other librariesmy experiment ###Code import tensorflow as tf import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. 
###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt 1122304/1115394 [==============================] - 0s 0us/step 1130496/1115394 [==============================] - 0s 0us/step ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') ###Output _____no_output_____ ###Markdown This is my own data ###Code text = open('friends-etitles.txt', 'rb').read().decode(encoding='utf-8') print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output 59 unique characters ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `tf.keras.layers.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `tf.keras.layers.StringLookup` layer: ###Code ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts form tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `tf.keras.layers.StringLookup(..., invert=True)`. Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `tf.keras.layers.StringLookup` layer so that the `[UNK]` tokens is set the same way. ###Code chars_from_ids = tf.keras.layers.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. 
Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output tf.Tensor( [b'E' b'p' b'i' b's' b'o' b'd' b'e' b'_' b'T' b'i' b't' b'l' b'e' b'\n' b'T' b'h' b'e' b' ' b'O' b'n' b'e' b' ' b'W' b'h' b'e' b'r' b'e' b' ' b'M' b'o' b'n' b'i' b'c' b'a' b' ' b'G' b'e' b't' b's' b' ' b'a' b' ' b'R' b'o' b'o' b'm' b'm' b'a' b't' b'e' b':' b' ' b'T' b'h' b'e' b' ' b'P' b'i' b'l' b'o' b't' b'\n' b'T' b'h' b'e' b' ' b'O' b'n' b'e' b' ' b'w' b'i' b't' b'h' b' ' b't' b'h' b'e' b' ' b'S' b'o' b'n' b'o' b'g' b'r' b'a' b'm' b' ' b'a' b't' b' ' b't' b'h' b'e' b' ' b'E' b'n' b'd' b'\n' b'T' b'h'], shape=(101,), dtype=string) ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output b'Episode_Title\nThe One Where Monica Gets a Roommate: The Pilot\nThe One with the Sonogram at the End\nTh' b'e One with the Thumb\nThe One with George Stephanopoulos\nThe One with the East German Laundry Detergen' b't\nThe One with the Butt\nThe One with the Blackout\nThe One Where Nana Dies Twice\nThe One Where Underdo' b'g Gets Away\nThe One with the Monkey\nThe One with Mrs. Bing\nThe One with the Dozen Lasagnas\nThe One wi' b'th the Boobies\nThe One with the Candy Hearts\nThe One with the Stoned Guy\nThe One with Two Parts: Part' ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output Input : b'Episode_Title\nThe One Where Monica Gets a Roommate: The Pilot\nThe One with the Sonogram at the End\nT' Target: b'pisode_Title\nThe One Where Monica Gets a Roommate: The Pilot\nThe One with the Sonogram at the End\nTh' ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches. 
###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](https://github.com/tensorflow/text/blob/master/docs/tutorials/images/text_generation_training.png?raw=1) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). 
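As a quick illustration (my own sketch, not from the tutorial) of those `states` and `return_state` options, using the `dataset` and `model` built above: ###Code
# Hedged sketch: capture the GRU state from one call and pass it back in on
# the next call, which is what text generation will rely on later.
for one_batch, _ in dataset.take(1):
    logits, gru_state = model(one_batch, return_state=True)
    logits_continued = model(one_batch, states=gru_state)
    print(logits.shape, gru_state.shape)
###Output
_____no_output_____
###Markdown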
Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output (64, 100, 60) # (batch_size, sequence_length, vocab_size) ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output Model: "my_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) multiple 15360 gru (GRU) multiple 3938304 dense (Dense) multiple 61500 ================================================================= Total params: 4,015,164 Trainable params: 4,015,164 Non-trainable params: 0 _________________________________________________________________ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output Input: b'with Frank Jr.\nThe One with the Flashback\nThe One with the Race Car Bed\nThe One with the Giant Pokin' Next Char Predictions: b"WzBroJLU!xKEe2\nB!dLMCCUySf:udaH!kJY[UNK]MnICrCNphslYHoUgFdYP'Y\n\nfmkgGpDgt!\nrrGImvAx.L[UNK]G CcL:HcFyhyFm!cOB" ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_loss = loss(target_example_batch, example_batch_predictions) mean_loss = example_batch_loss.numpy().mean() print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", mean_loss) ###Output Prediction shape: (64, 100, 60) # (batch_size, sequence_length, vocab_size) Mean loss: 4.0941534 ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. 
A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. ###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 50 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output Epoch 1/50 1/1 [==============================] - 0s 324ms/step - loss: 0.2873 Epoch 2/50 1/1 [==============================] - 0s 294ms/step - loss: 0.2840 Epoch 3/50 1/1 [==============================] - 0s 321ms/step - loss: 0.2795 Epoch 4/50 1/1 [==============================] - 0s 300ms/step - loss: 0.2794 Epoch 5/50 1/1 [==============================] - 0s 260ms/step - loss: 0.2695 Epoch 6/50 1/1 [==============================] - 0s 278ms/step - loss: 0.2656 Epoch 7/50 1/1 [==============================] - 0s 270ms/step - loss: 0.2656 Epoch 8/50 1/1 [==============================] - 0s 257ms/step - loss: 0.2578 Epoch 9/50 1/1 [==============================] - 0s 294ms/step - loss: 0.2600 Epoch 10/50 1/1 [==============================] - 0s 268ms/step - loss: 0.2596 Epoch 11/50 1/1 [==============================] - 0s 287ms/step - loss: 0.2518 Epoch 12/50 1/1 [==============================] - 0s 320ms/step - loss: 0.2461 Epoch 13/50 1/1 [==============================] - 1s 734ms/step - loss: 0.2568 Epoch 14/50 1/1 [==============================] - 0s 260ms/step - loss: 0.2822 Epoch 15/50 1/1 [==============================] - 1s 628ms/step - loss: 0.2547 Epoch 16/50 1/1 [==============================] - 0s 288ms/step - loss: 0.2581 Epoch 17/50 1/1 [==============================] - 0s 320ms/step - loss: 0.2447 Epoch 18/50 1/1 [==============================] - 0s 284ms/step - loss: 0.2525 Epoch 19/50 1/1 [==============================] - 0s 311ms/step - loss: 0.2383 Epoch 20/50 1/1 [==============================] - 0s 293ms/step - loss: 0.2418 Epoch 21/50 1/1 [==============================] - 4s 4s/step - loss: 0.2350 Epoch 22/50 1/1 [==============================] - 0s 338ms/step - loss: 0.2258 Epoch 23/50 1/1 [==============================] - 0s 267ms/step - loss: 0.2280 Epoch 24/50 1/1 [==============================] - 2s 2s/step - loss: 0.2183 Epoch 25/50 1/1 [==============================] - 2s 2s/step - loss: 0.2189 Epoch 26/50 1/1 [==============================] - 0s 262ms/step - loss: 0.2147 Epoch 27/50 1/1 [==============================] - 0s 292ms/step - loss: 0.2111 Epoch 28/50 1/1 [==============================] - 1s 1s/step - loss: 0.2084 Epoch 29/50 1/1 [==============================] - 0s 267ms/step - loss: 0.2065 Epoch 30/50 1/1 [==============================] - 2s 2s/step - loss: 0.2056 Epoch 31/50 1/1 [==============================] - 
0s 294ms/step - loss: 0.2073 Epoch 32/50 1/1 [==============================] - 0s 295ms/step - loss: 0.2093 Epoch 33/50 1/1 [==============================] - 0s 281ms/step - loss: 0.2188 Epoch 34/50 1/1 [==============================] - 2s 2s/step - loss: 0.1998 Epoch 35/50 1/1 [==============================] - 2s 2s/step - loss: 0.2143 Epoch 36/50 1/1 [==============================] - 1s 569ms/step - loss: 0.2093 Epoch 37/50 1/1 [==============================] - 0s 260ms/step - loss: 0.1997 Epoch 38/50 1/1 [==============================] - 1s 788ms/step - loss: 0.1945 Epoch 39/50 1/1 [==============================] - 0s 286ms/step - loss: 0.1948 Epoch 40/50 1/1 [==============================] - 0s 303ms/step - loss: 0.1908 Epoch 41/50 1/1 [==============================] - 0s 306ms/step - loss: 0.1946 Epoch 42/50 1/1 [==============================] - 0s 281ms/step - loss: 0.1833 Epoch 43/50 1/1 [==============================] - 1s 572ms/step - loss: 0.1844 Epoch 44/50 1/1 [==============================] - 1s 1s/step - loss: 0.1801 Epoch 45/50 1/1 [==============================] - 5s 5s/step - loss: 0.1840 Epoch 46/50 1/1 [==============================] - 1s 883ms/step - loss: 0.1756 Epoch 47/50 1/1 [==============================] - 0s 293ms/step - loss: 0.1744 Epoch 48/50 1/1 [==============================] - 1s 940ms/step - loss: 0.1741 Epoch 49/50 1/1 [==============================] - 0s 274ms/step - loss: 0.1702 Epoch 50/50 1/1 [==============================] - 1s 1s/step - loss: 0.1685 ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](https://github.com/tensorflow/text/blob/master/docs/tutorials/images/text_generation_sampling.png?raw=1)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=0.7): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. 
predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. ###Code FOut = open('friends-etitles-new.txt', 'w') start = time.time() states = None next_char = tf.constant(['The One with']) result = [next_char] for n in range(20000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() FOut.write(result[0].numpy().decode('utf-8')) FOut.flush() # print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output Run time: 79.27948784828186 ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. 
Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. ###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Text generation with an RNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/guide/keras/sequential_model) and [eager execution](https://www.tensorflow.org/guide/eager). 
The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q": QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills m While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries ###Code import tensorflow as tf import numpy as np import os import time ###Output _____no_output_____ ###Markdown Download the Shakespeare datasetChange the following line to run this code on your own data. ###Code path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') ###Output _____no_output_____ ###Markdown Read the dataFirst, look in the text: ###Code # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print(f'Length of text: {len(text)} characters') # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print(f'{len(vocab)} unique characters') ###Output _____no_output_____ ###Markdown Process the text Vectorize the textBefore training, you need to convert the strings to a numerical representation. The `tf.keras.layers.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first. ###Code example_texts = ['abcdefg', 'xyz'] chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8') chars ###Output _____no_output_____ ###Markdown Now create the `tf.keras.layers.StringLookup` layer: ###Code ids_from_chars = tf.keras.layers.StringLookup( vocabulary=list(vocab), mask_token=None) ###Output _____no_output_____ ###Markdown It converts form tokens to character IDs: ###Code ids = ids_from_chars(chars) ids ###Output _____no_output_____ ###Markdown Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `tf.keras.layers.StringLookup(..., invert=True)`. 
Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `tf.keras.layers.StringLookup` layer so that the `[UNK]` tokens is set the same way. ###Code chars_from_ids = tf.keras.layers.StringLookup( vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None) ###Output _____no_output_____ ###Markdown This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters: ###Code chars = chars_from_ids(ids) chars ###Output _____no_output_____ ###Markdown You can `tf.strings.reduce_join` to join the characters back into strings. ###Code tf.strings.reduce_join(chars, axis=-1).numpy() def text_from_ids(ids): return tf.strings.reduce_join(chars_from_ids(ids), axis=-1) ###Output _____no_output_____ ###Markdown The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices. ###Code all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8')) all_ids ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids) for ids in ids_dataset.take(10): print(chars_from_ids(ids).numpy().decode('utf-8')) seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) ###Output _____no_output_____ ###Markdown The `batch` method lets you easily convert these individual characters to sequences of the desired size. ###Code sequences = ids_dataset.batch(seq_length+1, drop_remainder=True) for seq in sequences.take(1): print(chars_from_ids(seq)) ###Output _____no_output_____ ###Markdown It's easier to see what this is doing if you join the tokens back into strings: ###Code for seq in sequences.take(5): print(text_from_ids(seq).numpy()) ###Output _____no_output_____ ###Markdown For training you'll need a dataset of `(input, label)` pairs. Where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character. Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep: ###Code def split_input_target(sequence): input_text = sequence[:-1] target_text = sequence[1:] return input_text, target_text split_input_target(list("Tensorflow")) dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print("Input :", text_from_ids(input_example).numpy()) print("Target:", text_from_ids(target_example).numpy()) ###Output _____no_output_____ ###Markdown Create training batchesYou used `tf.data` to split the text into manageable sequences. 
But before feeding this data into the model, you need to shuffle the data and pack it into batches. ###Code # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = ( dataset .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) dataset ###Output _____no_output_____ ###Markdown Build The Model This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)). This model has three layers:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model. ###Code # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class MyModel(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, rnn_units): super().__init__(self) self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True) self.dense = tf.keras.layers.Dense(vocab_size) def call(self, inputs, states=None, return_state=False, training=False): x = inputs x = self.embedding(x, training=training) if states is None: states = self.gru.get_initial_state(x) x, states = self.gru(x, initial_state=states, training=training) x = self.dense(x, training=training) if return_state: return x, states else: return x model = MyModel( # Be sure the vocabulary size matches the `StringLookup` layers. vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) ###Output _____no_output_____ ###Markdown For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:![A drawing of the data passing through the model](images/text_generation_training.png) Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnnrnn_state_reuse). 
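###Markdown As a point of comparison, the same three layers could be stacked in a `tf.keras.Sequential` model for the training phase. This is only a sketch (it assumes the `vocab_size`, `embedding_dim`, and `rnn_units` values defined above) and, unlike `MyModel`, it does not return the GRU state that the generation loop later in this notebook needs. ###Code
# Illustrative only: a Sequential stack of the same three layers.
# It can be compiled and fit the same way, but it does not expose the
# RNN state for reuse during text generation.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.GRU(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size)
])
###Output _____no_output_____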
Try the modelNow run the model to see that it behaves as expected.First check the shape of the output: ###Code for input_example_batch, target_example_batch in dataset.take(1): example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") ###Output _____no_output_____ ###Markdown In the above example the sequence length of the input is `100` but the model can be run on inputs of any length: ###Code model.summary() ###Output _____no_output_____ ###Markdown To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch: ###Code sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy() ###Output _____no_output_____ ###Markdown This gives us, at each timestep, a prediction of the next character index: ###Code sampled_indices ###Output _____no_output_____ ###Markdown Decode these to see the text predicted by this untrained model: ###Code print("Input:\n", text_from_ids(input_example_batch[0]).numpy()) print() print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy()) ###Output _____no_output_____ ###Markdown Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because your model returns logits, you need to set the `from_logits` flag. ###Code loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True) example_batch_loss = loss(target_example_batch, example_batch_predictions) mean_loss = example_batch_loss.numpy().mean() print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("Mean loss: ", mean_loss) ###Output _____no_output_____ ###Markdown A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized: ###Code tf.exp(mean_loss).numpy() ###Output _____no_output_____ ###Markdown Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function. 
###Code model.compile(optimizer='adam', loss=loss) ###Output _____no_output_____ ###Markdown Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training: ###Code # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints' # Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_prefix, save_weights_only=True) ###Output _____no_output_____ ###Markdown Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. ###Code EPOCHS = 20 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback]) ###Output _____no_output_____ ###Markdown Generate text The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.![To generate text the model's output is fed back to the input](images/text_generation_sampling.png)Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text. The following makes a single step prediction: ###Code class OneStep(tf.keras.Model): def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0): super().__init__() self.temperature = temperature self.model = model self.chars_from_ids = chars_from_ids self.ids_from_chars = ids_from_chars # Create a mask to prevent "[UNK]" from being generated. skip_ids = self.ids_from_chars(['[UNK]'])[:, None] sparse_mask = tf.SparseTensor( # Put a -inf at each bad index. values=[-float('inf')]*len(skip_ids), indices=skip_ids, # Match the shape to the vocabulary dense_shape=[len(ids_from_chars.get_vocabulary())]) self.prediction_mask = tf.sparse.to_dense(sparse_mask) @tf.function def generate_one_step(self, inputs, states=None): # Convert strings to token IDs. input_chars = tf.strings.unicode_split(inputs, 'UTF-8') input_ids = self.ids_from_chars(input_chars).to_tensor() # Run the model. # predicted_logits.shape is [batch, char, next_char_logits] predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True) # Only use the last prediction. predicted_logits = predicted_logits[:, -1, :] predicted_logits = predicted_logits/self.temperature # Apply the prediction mask: prevent "[UNK]" from being generated. predicted_logits = predicted_logits + self.prediction_mask # Sample the output logits to generate token IDs. predicted_ids = tf.random.categorical(predicted_logits, num_samples=1) predicted_ids = tf.squeeze(predicted_ids, axis=-1) # Convert from token ids to characters predicted_chars = self.chars_from_ids(predicted_ids) # Return the characters and model state. return predicted_chars, states one_step_model = OneStep(model, chars_from_ids, ids_from_chars) ###Output _____no_output_____ ###Markdown Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. 
###Code start = time.time() states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above. ###Code start = time.time() states = None next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:']) result = [next_char] for n in range(1000): next_char, states = one_step_model.generate_one_step(next_char, states=states) result.append(next_char) result = tf.strings.join(result) end = time.time() print(result, '\n\n' + '_'*80) print('\nRun time:', end - start) ###Output _____no_output_____ ###Markdown Export the generatorThis single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted. ###Code tf.saved_model.save(one_step_model, 'one_step') one_step_reloaded = tf.saved_model.load('one_step') states = None next_char = tf.constant(['ROMEO:']) result = [next_char] for n in range(100): next_char, states = one_step_reloaded.generate_one_step(next_char, states=states) result.append(next_char) print(tf.strings.join(result)[0].numpy().decode("utf-8")) ###Output _____no_output_____ ###Markdown Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.The most important part of a custom training loop is the train step function.Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The basic procedure is:1. Execute the model and calculate the loss under a `tf.GradientTape`.2. Calculate the updates and apply them to the model using the optimizer. ###Code class CustomTraining(MyModel): @tf.function def train_step(self, inputs): inputs, labels = inputs with tf.GradientTape() as tape: predictions = self(inputs, training=True) loss = self.loss(labels, predictions) grads = tape.gradient(loss, model.trainable_variables) self.optimizer.apply_gradients(zip(grads, model.trainable_variables)) return {'loss': loss} ###Output _____no_output_____ ###Markdown The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods. 
###Code model = CustomTraining( vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units) model.compile(optimizer = tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) model.fit(dataset, epochs=1) ###Output _____no_output_____ ###Markdown Or if you need more control, you can write your own complete custom training loop: ###Code EPOCHS = 10 mean = tf.metrics.Mean() for epoch in range(EPOCHS): start = time.time() mean.reset_states() for (batch_n, (inp, target)) in enumerate(dataset): logs = model.train_step([inp, target]) mean.update_state(logs['loss']) if batch_n % 50 == 0: template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}" print(template) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: model.save_weights(checkpoint_prefix.format(epoch=epoch)) print() print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}') print(f'Time taken for 1 epoch {time.time() - start:.2f} sec') print("_"*80) model.save_weights(checkpoint_prefix.format(epoch=epoch)) ###Output _____no_output_____
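###Markdown The checkpoints written by `save_weights` above can be loaded back into a freshly constructed model. A minimal sketch, assuming the `MyModel` definition, `checkpoint_dir`, and vocabulary objects from earlier in this notebook; the dummy forward pass is only there to create the model's variables before the checkpointed values are restored. ###Code
# Build a fresh model with the same architecture as the one that was trained.
restored_model = MyModel(
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)

# One dummy call creates the layer variables so the checkpoint can be loaded.
restored_model(tf.zeros([1, 1], dtype=tf.int64))

# Load the most recent checkpoint written during training.
restored_model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
###Output _____no_output_____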
3-24homework/Iris.ipynb
###Markdown Predicting iris species with a BP neural network 1. Data preprocessing ###Code # Read the iris data import pandas as pd iris = pd.read_csv(r"dataset/iris.csv") iris # Map the output classes to numbers # Iris-setosa becomes 1 # Iris-versicolor becomes 2 # Iris-virginica becomes 3 iris.loc[(iris['class']=='Iris-setosa'),'class'] = 1 iris.loc[(iris['class']=='Iris-versicolor'),'class'] = 2 iris.loc[(iris['class']=='Iris-virginica'),'class'] = 3 iris # Shuffle the data # frac is the fraction to sample; 1 keeps all rows iris = iris.sample(frac=1) iris # Extract the feature values iris_data = iris.values iris_feature = iris_data[0:,0:4] iris_feature[0:10] len(iris_feature) ###Output _____no_output_____ ###Markdown 2. Building the neural network ###Code # Generate an I*J matrix of random numbers, used to build the weights import numpy as np def makeArray(I,J): m = [] for i in range(I): fill = np.random.random() m.append([fill]*J) return m # Node activation function: sigmoid import math def sigmoid(x): return 1.0 / (1.0 + math.exp(-x)) # Derivative of the sigmoid def dsigmoid(x): return x * (1-x) class NN: # Three-layer back-propagation neural network def __init__(self,ni,nh,no): # Define the number of nodes per layer; add a bias node to the input and hidden layers self.ni = ni + 1 self.nh = nh + 1 self.no = no # Activations for all network nodes (vectors) self.ai = [1.0] * self.ni self.ah = [1.0] * self.nh self.ao = [1.0] * self.no # Build the weights self.wi = makeArray(self.ni,self.nh) self.wo = makeArray(self.nh,self.no) # Forward propagation def update(self,inputs): # Activate the input layer for i in range(self.ni - 1): self.ai[i] = inputs[i] # Activate the hidden layer for j in range(self.nh): sum = 0.0 for i in range(self.ni): sum = sum + self.ai[i] * self.wi[i][j] self.ah[j] = sigmoid(sum) # Activate the output layer for k in range(self.no): sum = 0.0 for j in range(self.nh): sum = sum + self.ah[j] * self.wo[j][k] self.ao[k] = sigmoid(sum) return self.ao[:] # Back-propagation def backPropagate(self,targets,lr): # Compute the output layer error output_deltas = [0.0] * self.no for k in range(self.no): error = targets[k] - self.ao[k] output_deltas[k] = dsigmoid(self.ao[k]) * error # Compute the hidden layer error hidden_deltas = [0.0] * self.nh for j in range(self.nh): error = 0.0 for k in range(self.no): error = error + output_deltas[k] * self.wo[j][k] hidden_deltas[j] = dsigmoid(self.ah[j]) * error # Update the output layer weights for j in range(self.nh): for k in range(self.no): change = output_deltas[k] * self.ah[j] self.wo[j][k] = self.wo[j][k] + lr * change # Update the input layer weights for i in range(self.ni): for j in range(self.nh): change = hidden_deltas[j] * self.ai[i] self.wi[i][j] = self.wi[i][j] + lr * change # Compute the total squared error over all outputs error = 0.0 for k in range(self.no): error = error + 0.5 * (targets[k] - self.ao[k]) ** 2 return error def test(self,patterns): count = 0 for p in patterns: target = p[1].index(1) + 1 result = self.update(p[0]) index = result.index(max(result)) + 1 print(p[0],':',target,'->',index) if(target == index): count = count + 1 accuracy = float(count / len(patterns)) print ('accuracy: %-.9f' % accuracy) def weights(self): print('Input layer weights:') for i in range(self.ni): print(self.wi[i]) print() print('Hidden layer weights:') for j in range(self.nh): print(self.wo[j]) def train(self,patterns,iterations = 1000,lr = 0.1): for i in range(iterations): error = 0.0 for p in patterns: inputs = p[0] targets = p[1] self.update(inputs) error = error + self.backPropagate(targets,lr) if i % 100 == 0: print('error: %-.9f' % error) data = [] for i in range(len(iris_feature)): ele = [] ele.append(list(iris_feature[i])) if iris_data[i][4] == 1: ele.append([1,0,0]) elif iris_data[i][4] == 2: ele.append([0,1,0]) else: ele.append([0,0,1]) data.append(ele) training = data[0:105] test = data[105:] test nn = NN(4,4,3) nn.train(training,iterations = 1000) nn.test(test) nn.weights() ###Output Input layer weights: [0.30563188105000993, 0.5002109776947306, 0.5072629022093491, -4.161579448522327, 0.5062240976711463] [2.4333773497197986, 0.767445918232545, 0.7650363609045148, -9.466223606466615, 0.7656755062053946] [-3.4197701344731426, 0.3537319954049937, 0.37773794052590776, 8.767540396971134, 0.3729717138337643] [-1.5693058163698062, 0.17852223014988272, 0.19160094047725998, 9.785742012387557, 0.18893909395518024] [0.8113573527683816, 0.5855306348573399, 0.5853289187557013, -5.174277010774653, 0.585449532277854] Hidden layer weights: [7.557036148537164, -7.634063226854593, -1.3124312035893901] [-1.0710665742224672, 1.25616303127309, -1.1224406682033232] [-1.2947035902727753, 0.9698175015299089, -1.5010449394306558] [-7.991729659252586, -6.576007298524501, 7.9080386634857796] [-1.2198325731371396, 1.0598852014664262, -1.3933700492659173]
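###Markdown The updates applied in `backPropagate` above are the standard delta rule for a squared-error loss with sigmoid activations. In the notation of the code, with $t_k$ a target value, $a_k$ an output activation, $a_j$ a hidden activation, $a_i$ an input value, and $\eta$ the learning rate `lr`: $$\delta_k = a_k(1-a_k)\,(t_k - a_k), \qquad \Delta w_{jk} = \eta\,\delta_k\,a_j$$ $$\delta_j = a_j(1-a_j)\sum_k \delta_k\,w_{jk}, \qquad \Delta w_{ij} = \eta\,\delta_j\,a_i$$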
docs/notebooks/model_selection_report.ipynb
###Markdown Imports ###Code import pandas as pd import category_encoders import numpy as np from matplotlib import pyplot as plt from scipy import interp from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline, make_pipeline from sklearn.base import TransformerMixin from sklearn.model_selection import cross_validate, StratifiedKFold from sklearn.metrics import roc_auc_score, roc_curve, auc from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from xgboost import XGBClassifier from lightgbm import LGBMClassifier from pipeline.custom_transformers import NAEncoder, ColumnDropper X_train = pd.read_csv('data/X_train.csv', na_values=['N/A or Unknown', 'unknown']) y_train = pd.read_csv('data/y_train.csv', names=['injury']) def visualize_roc_auc(X_train, y_train, classifier): plt.figure(figsize=(18,10)) cv = StratifiedKFold(n_splits=6) X, y = X_train, y_train.values.ravel() tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) i = 0 for train, test in cv.split(X, y): probas_ = classifier.fit(X.iloc[train], y[train]).predict_proba(X.iloc[test]) # Compute ROC curve and area the curve fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1]) tprs.append(interp(mean_fpr, fpr, tpr)) tprs[-1][0] = 0.0 roc_auc = auc(fpr, tpr) aucs.append(roc_auc) plt.plot(fpr, tpr, lw=1, alpha=0.3, label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc)) i += 1 plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Luck', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) plt.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2, label=r'$\pm$ 1 std. 
dev.') plt.xlim([-0.05, 1.05]) plt.ylim([-0.05, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.show() ###Output _____no_output_____ ###Markdown Baseline ###Code clf = make_pipeline(category_encoders.OneHotEncoder(), LogisticRegression()) cvx = cross_validate( clf, X_train, y_train.values.ravel(), scoring='roc_auc', n_jobs=-1, cv=15, return_train_score=False ) cvx['test_score'].mean(), cvx['test_score'].std() visualize_roc_auc(X_train, y_train, clf) ###Output _____no_output_____ ###Markdown Model selection ###Code names = ["Nearest Neighbors", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "XGBoost", "LightGBM", "Logistic Regression"] classifiers = [ KNeighborsClassifier(3), DecisionTreeClassifier(), RandomForestClassifier(), MLPClassifier(alpha=1), AdaBoostClassifier(), GaussianNB(), XGBClassifier(), LGBMClassifier(), LogisticRegression() ] for name, clf in zip(names, classifiers): pipeline = make_pipeline( NAEncoder(['other_person_location']), NAEncoder(['other_factor_1', 'other_factor_2', 'other_factor_3']), ColumnDropper('age_in_years'), category_encoders.OneHotEncoder(impute_missing=False), clf ) cvx = cross_validate( pipeline, X_train, y_train.values.ravel(), scoring='roc_auc', n_jobs=-1, cv=15, return_train_score=False, ) print (name, cvx['test_score'].mean(), cvx['test_score'].std()) ###Output Nearest Neighbors 0.5824872973205729 0.02764310849556343 Decision Tree 0.6085504156962913 0.013164203372976847 Random Forest 0.6078243383451253 0.011864850254015789 Neural Net 0.6034172313756204 0.013553373774203181 AdaBoost 0.6031300261125538 0.017115456680725196 Naive Bayes 0.6052936261879763 0.013991410050303427 XGBoost 0.6102930656890361 0.013019572037251978 LightGBM 0.6081408691875966 0.012830953267385732 Logistic Regression 0.6052910156524373 0.017396995877530397 ###Markdown Tuned model ###Code pipeline = make_pipeline( ColumnDropper('age_in_years'), NAEncoder(['other_person_location']), NAEncoder(['other_factor_1', 'other_factor_2', 'other_factor_3']), category_encoders.OneHotEncoder(), XGBClassifier(base_score=np.mean(y_train.values), booster='dart', colsample_bylevel=1, colsample_bytree=0.55, gamma=1, learning_rate=0.1, max_delta_step=0, max_depth=7, min_child_weight=3, missing=None, n_estimators=100, n_jobs=1, nthread=1, objective='binary:logistic', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, silent=True, subsample=1 ) ) cvx = cross_validate( pipeline, X_train, y_train.values.ravel(), scoring='roc_auc', n_jobs=-1, cv=15, return_train_score=False, ) print ("XGBoost", cvx['test_score'].mean(), cvx['test_score'].std()) visualize_roc_auc(X_train, y_train, pipeline) ###Output _____no_output_____
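###Markdown The hyperparameter values in the tuned XGBoost pipeline above are given as-is; the notebook does not show how they were selected. One common way to arrive at such values is a randomized search over the same pipeline. The sketch below is illustrative only: the parameter grid is an assumption rather than the grid actually used, and it relies on `make_pipeline` naming the XGBoost step `xgbclassifier`. ###Code
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical search space; the real tuning procedure is not shown in this notebook.
param_distributions = {
    'xgbclassifier__max_depth': [3, 5, 7, 9],
    'xgbclassifier__min_child_weight': [1, 3, 5],
    'xgbclassifier__colsample_bytree': [0.55, 0.75, 1.0],
    'xgbclassifier__gamma': [0, 1, 5],
}

search = RandomizedSearchCV(pipeline, param_distributions, n_iter=20,
                            scoring='roc_auc', cv=5, n_jobs=-1, random_state=0)
search.fit(X_train, y_train.values.ravel())
print(search.best_params_, search.best_score_)
###Output _____no_output_____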
_build/jupyter_execute/notebooks.ipynb
###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo-wide.svg)You can also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)You can also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. 
Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo-wide.svg)You can also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)You an also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. 
Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)You an also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo-wide.svg)You can also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. 
Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)You an also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. Markdown + notebooksAs it is markdown, you can embed images, HTML, etc into your posts!![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)You an also $add_{math}$ and$$math^{blocks}$$or$$\begin{aligned}\mbox{mean} la_{tex} \\ \\math blocks\end{aligned}$$But make sure you \$Escape \$your \$dollar signs \$you want to keep! MyST markdownMyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, checkout [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/). Code blocks and outputsJupyter Book will also embed your code blocks and output in your book.For example, here's some sample Matplotlib code: ###Code from matplotlib import rcParams, cycler import matplotlib.pyplot as plt import numpy as np plt.ion() # Fixing random state for reproducibility np.random.seed(19680801) N = 10 data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)] data = np.array(data).T cmap = plt.cm.coolwarm rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N))) from matplotlib.lines import Line2D custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4), Line2D([0], [0], color=cmap(.5), lw=4), Line2D([0], [0], color=cmap(1.), lw=4)] fig, ax = plt.subplots(figsize=(10, 5)) lines = ax.plot(data) ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']); ###Output _____no_output_____ ###Markdown Content with notebooksYou can also create content with Jupyter Notebooks. This means that you can includecode blocks and their outputs in your book. 
examples/sentence_similarity/gensen_aml_deep_dive.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Training GenSen on AzureML with SNLI Dataset**GenSen: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning** [\[1\]](References) IntroductionGenSen is a technique to learn general purpose, fixed-length representations of sentences via multi-task training. The model combines the benefits of diverse sentence representation learning objectives into a single multi-task framework. As described in the paper **Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning**, it is "the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors" [\[1\]](References). These representations are useful for transfer and low-resource learning. GenSen is trained on several data sources with multiple training objectives on over 100 million sentences.GenSen yields state-of-the-art results on multiple datasets, such as MRPC, SICK-R, SICK-E and STS, for sentence similarity. The reported results are as follows compared with other models [\[3\]](References):| Model | MRPC | SICK-R | SICK-E | STS || --- | --- | --- | --- | --- || GenSen (Subramanian et al., 2018) | 78.6/84.4 | 0.888 | 87.8 | 78.9/78.6 || [InferSent](https://arxiv.org/abs/1705.02364) (Conneau et al., 2017) | 76.2/83.1 | 0.884 | 86.3 | 75.8/75.5 || [TF-KLD](https://www.aclweb.org/anthology/D13-1090) (Ji and Eisenstein, 2013) | 80.4/85.9 | - | - | - |This notebook serves as an introduction to an end-to-end NLP solution for sentence similarity by demonstrating how to train and tune GenSen on the AzureML platform. We show the advantages of AzureML when training large NLP models with GPU.For more information on **AzureML**, see these resources:* [Quickstart notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)* [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters) Background: Sequence-to-Sequence Learning![Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)](https://nlpbp.blob.core.windows.net/images/seq2seq.png)**Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)**The GenSen model is known to be most similar to that of Luong et al. (2015) [\[4\]](References), who train a many-to-many **sequence-to-sequence** model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. Sequence-to-sequence learning, or seq2seq, aims to directly model the conditional probability $p(y|x)$ of mapping an input sequence, $x_1,...,x_n$, into an output sequence, $y_1,...,y_m$. This is done using an encoder-decoder framework. As illustrated in the above figure, the encoder computes a representation $s$ for each input sequence, which the *decoder* uses to generate the output sequence. This decomposes the conditional probability as [\[4\]](References):$$\log p(y|x)=\sum_{j=1}^{m} \log p(y_j|y_{<j}, x, s)$$
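To make this decomposition concrete, here is a tiny numerical sketch (the per-token probabilities below are made-up toy values, not the output of any trained model): summing the per-token conditional log-probabilities recovers the log-probability of the full output sequence. ###Code
import numpy as np

# Toy per-token conditional probabilities p(y_j | y_<j, x, s) for a 3-token output sequence.
# These numbers are purely illustrative -- they do not come from a trained seq2seq model.
token_probs = np.array([0.6, 0.5, 0.8])

# log p(y|x) = sum_j log p(y_j | y_<j, x, s)
log_prob_sequence = np.sum(np.log(token_probs))

print("log p(y|x) = {:.4f}".format(log_prob_sequence))          # -1.4271
print("p(y|x)     = {:.4f}".format(np.exp(log_prob_sequence)))  # 0.2400 = 0.6 * 0.5 * 0.8
###Output
_____no_output_____
###Markdown In the actual model the per-token distributions are produced by the decoder conditioned on the encoder representation $s$; here they are fixed numbers purely to show that the sum of log-probabilities corresponds to the product of probabilities.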
It is worth noting that the GenSen model deviates from Luong's seq2seq method in two key ways. First, GenSen uses an attention mechanism, meaning that the learned vector representations are not of fixed length. Second, GenSen optimizes for improvements on the same tasks on which the model is trained, rather than optimizing for transferability to different tasks or domains. [\[1\]](References) Azure ML Compute vs. LocalWe did a comparative study to make it easier for you to choose between a GPU enabled Azure VM and Azure ML compute. The table below provides the cost vs performance trade-off for each of the choices. It shows that distributed training on AzureML makes the model converge faster and reach a better training loss in a similar amount of training time.* The "Azure VM" column refers to the running time of the [gensen local](gensen_local.ipynb) notebook. All the other columns refer to the current notebook.* Both the Azure VM and each Azure ML Compute node are Standard_NC6 with 1 NVIDIA Tesla K80 GPU with 12 GB GPU memory. * The total time in the table stands for the training time + setup time.* Cost is the estimated cost of running the Azure ML Compute Job or the VM up-time.**Please note:** These were the estimated costs for running these notebooks as of July 1st, 2019. Please look at the [Azure Pricing Calculator](https://azure.microsoft.com/en-us/pricing/calculator/) to see the most up to date pricing information. |---|Azure VM| AML 1 Node| AML 2 Nodes | AML 4 Nodes | AML 8 Nodes||---|---|---|---|---|---||Training Loss|4.91|4.81|4.78|4.77|4.58||Total Time|1h 05m|1h 54m|1h 44m|1h 26m|1h 07m||Cost|\$1.12|\$2.71|\$4.68|\$7.9|\$12.1| Table of Contents0. [Global Settings](0-Global-Settings)1. [Data Loading and Preprocessing](1-Data-Loading-and-Preprocessing) * 1.1. [Load SNLI](1.1-Load-SNLI) * 1.2. [Tokenize](1.2-Tokenize) * 1.3. [Preprocess](1.3-Preprocess) * 1.4. [Upload to Azure Blob Storage](1.4-Upload-to-Azure-Blob-Storage) 2. [Train GenSen with Distributed Pytorch and Horovod on AzureML](2-Train-GenSen-with-Distributed-Pytorch-and-Horovod-on-AzureML) * 2.1 [Create or Attach a Remote Compute Target](2.1-Create-or-Attach-a-Remote-Compute-Target) * 2.2. [Prepare the Training Script](2.2-Prepare-the-Training-Script) * 2.3. [Define the Estimator and Experiment](2.3-Define-the-Estimator-and-Experiment) * 2.3.1 [Create a PyTorch Estimator](2.3.1-Create-a-PyTorch-Estimator) * 2.3.2 [Create the Experiment](2.3.2-Create-the-Experiment) * 2.4. [Submit the Training Job to the Compute Target](2.4-Submit-the-Training-Job-to-the-Compute-Target) * 2.4.1 [Monitor the Run](2.4.1-Monitor-the-Run) * 2.4.2 [Interpret the Training Results](2.4.2-Interpret-the-Training-Results)3.
[Tune Model Hyperparameters](3-Tune-Model-Hyperparameters) * 3.1 [Start a Hyperparameter Sweep](3.1-Start-a-Hyperparameter-Sweep) * 3.2 [Monitor HyperDrive Runs](3.2-Monitor-HyperDrive-Runs) * 3.3 [Find the Best Model](3.3-Find-the-Best-Model)- [References](References) 0 Global Settings ###Code import sys import time import os import pandas as pd import shutil import papermill as pm import scrapbook as sb sys.path.append("../../") from utils_nlp.dataset import snli, preprocess, Split from utils_nlp.azureml import azureml_utils from utils_nlp.models.gensen.preprocess_utils import gensen_preprocess import azureml as aml import azureml.train.hyperdrive as hd from azureml.telemetry import set_diagnostics_collection import azureml.data from azureml.data.azure_storage_datastore import AzureFileDatastore from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException from azureml.core import Experiment, get_run from azureml.core.runconfig import MpiConfiguration from azureml.train.dnn import PyTorch from azureml.train.estimator import Estimator from azureml.train.hyperdrive import ( RandomParameterSampling, BanditPolicy, HyperDriveConfig, uniform, PrimaryMetricGoal, ) from azureml.widgets import RunDetails print("System version: {}".format(sys.version)) print("Azure ML SDK Version:", aml.core.VERSION) print("Pandas version: {}".format(pd.__version__)) # Model configuration NROWS = None CACHE_DIR = "./temp" AZUREML_CONFIG_PATH = "./.azureml" AZUREML_VERBOSE = False # Prints verbose azureml logs when True MAX_EPOCH = 2 # by default is None ENTRY_SCRIPT = "utils_nlp/gensen/gensen_train.py" TRAIN_SCRIPT = "gensen_train.py" CONFIG_PATH = "gensen_config.json" EXPERIMENT_NAME = "NLP-SS-GenSen-deepdive" UTIL_NLP_PATH = "../../utils_nlp" MAX_TOTAL_RUNS = 8 MAX_CONCURRENT_RUNS = 4 ###Output _____no_output_____ ###Markdown In this notebook we use the Azure Machine Learning Python SDK to facilitate remote training and computation. To get started, we must first initialize an AzureML [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace), a centralized resource for managing experiment runs, compute resources, datastores, and other machine learning artifacts on the cloud. The following cell looks to set up the connection to your [Azure Machine Learning service Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace). You can choose to connect to an existing workspace or create a new one. **To access an existing workspace:**1. If you have a `config.json` file, you do not need to provide the workspace information; you will only need to update the `config_path` variable that is defined above which contains the file.2. Otherwise, you will need to supply the following: * The name of your workspace * Your subscription id * The resource group name**To create a new workspace:**Set the following information:* A name for your workspace* Your subscription id* The resource group name* [Azure region](https://azure.microsoft.com/en-us/global-infrastructure/regions/) to create the workspace in, such as `eastus2`. 
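As an aside, once a `config.json` has been saved (for example under `./.azureml`), the workspace can also be loaded directly with the core SDK instead of the repository helper used in the next cell. This is only an illustrative alternative and assumes such a config file already exists at that path: ###Code
from azureml.core import Workspace

# Load an existing workspace from a previously saved config.json.
# Assumes the config file lives under ./.azureml (e.g. written earlier with ws.write_config()).
ws = Workspace.from_config(path="./.azureml")
print(ws.name, ws.location, ws.resource_group, sep="\t")
###Output
_____no_output_____
###Markdown The cell below achieves the same result through the repository's `azureml_utils` helper, which additionally creates the workspace when it does not already exist.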
###Code if os.path.exists(AZUREML_CONFIG_PATH): ws = azureml_utils.get_or_create_workspace(config_path=AZUREML_CONFIG_PATH) else: ws = azureml_utils.get_or_create_workspace( config_path=AZUREML_CONFIG_PATH, subscription_id="<SUBSCRIPTION_ID>", resource_group="<RESOURCE_GROUP>", workspace_name="<WORKSPACE_NAME>", workspace_region="<WORKSPACE_REGION>", ) if AZUREML_VERBOSE: print("Workspace name: {}".format(ws.name)) print("Azure region: {}".format(ws.location)) print("Subscription id: {}".format(ws.subscription_id)) print("Resource group: {}".format(ws.resource_group)) ###Output _____no_output_____ ###Markdown Opt-in diagnostics for better experience, quality, and security of future releases. ###Code set_diagnostics_collection(send_diagnostics=True) ###Output Turning diagnostics collection on. ###Markdown 1 Data Loading and Preprocessing We use the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset in this example.Note: The dataset used in the original paper can be downloaded by running the bashfile [here](https://github.com/Maluuba/gensen/blob/master/get_data.sh). Training on the original datasets will reproduce the results in the [paper](https://arxiv.org/abs/1804.00079), but **will take about 20 hours of training time**. For the purposes of this example we use SNLI, a subset of the original dataset, as the only training dataset. 1.1 Load SNLI ###Code data_dir = os.path.join(CACHE_DIR, "data") train = snli.load_pandas_df(data_dir, file_split=Split.TRAIN, nrows=NROWS) dev = snli.load_pandas_df(data_dir, file_split=Split.DEV, nrows=NROWS) test = snli.load_pandas_df(data_dir, file_split=Split.TEST, nrows=NROWS) train.head() ###Output _____no_output_____ ###Markdown 1.2 Tokenize Here we clean the dataframes, do lowercase standardization, and tokenize the text using the [NLTK](https://www.nltk.org/) library. ###Code def clean_and_tokenize(df): df = snli.clean_cols(df) df = snli.clean_rows(df) df = preprocess.to_lowercase(df) df = preprocess.to_nltk_tokens(df) return df ###Output _____no_output_____ ###Markdown For `clean_and_tokenize` function, it may take a little bit longer. To run the following cell, it takes around 5 to 10 mins. ###Code train = clean_and_tokenize(train) dev = clean_and_tokenize(dev) test = clean_and_tokenize(test) train.head() ###Output _____no_output_____ ###Markdown 1.3 PreprocessWe format our data in a specific way in order for the Gensen model to be able to ingest it. We do this by* Saving the tokens for each split in a `snli_1.0_{split}.txt.clean` file, with the sentence pairs and scores tab-separated and the tokens separated by a single space. Since some of the samples have invalid scores ("-"), we filter those out and save them separately in a `snli_1.0_{split}.txt.clean.noblank` file.* Saving the tokenized sentence and labels separately, in the form `snli_1.0_{split}.txt.s1.tok` or `snli_1.0_{split}.txt.s2.tok` or `snli_1.0_{split}.txt.lab`. ###Code preprocessed_data_dir = gensen_preprocess(train, dev, test, data_dir) print("Writing input data to {}".format(preprocessed_data_dir)) ###Output Writing input data to ./temp\data\clean/snli_1.0 ###Markdown 1.4 Upload to Azure Blob StorageWe upload the data from the local machine into the datastore so that it can be accessed for remote training. The datastore is a reference that points to a storage account, e.g. the Azure Blob Storage service. 
It can be attached to an AzureML workspace to facilitate data management operations such as uploading/downloading data or interacting with data from remote compute targets.**Note: If you already have the preprocessed files under `clean/snli_1.0/` in your default datastore, you DO NOT need to redo this section.** ###Code ds = ws.get_default_datastore() if AZUREML_VERBOSE: print("Datastore type: {}".format(ds.datastore_type)) print("Datastore account: {}".format(ds.account_name)) print("Datastore container: {}".format(ds.container_name)) print("Data reference: {}".format(ds.as_mount())) _ = ds.upload( src_dir=os.path.join(data_dir, "clean/snli_1.0"), overwrite=False, show_progress=AZUREML_VERBOSE, ) ###Output _____no_output_____ ###Markdown 2 Train GenSen with Distributed Pytorch and Horovod on AzureMLIn this tutorial, we train a GenSen model with PyTorch on AML using distributed training across a GPU cluster.After creating the workspace and setting up the development environment, training a model in Azure Machine Learning involves the following steps:1. Creating a remote compute target2. Preparing the training data and uploading it to the datastore (Note that this was done in Section 1.4)3. Preparing the training script4. Creating Estimator and Experiment objects5. Submitting the Estimator to an Experiment attached to the AzureML workspace 2.1 Create or Attach a Remote Compute TargetWe create and attach a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training the model. Here we use the AzureML-managed compute target ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute)) as our remote training compute resource. Our cluster autoscales from 0 to 8 `STANDARD_NC6` GPU nodes, matching the `max_nodes` setting in the cell below.Creating and configuring the AmlCompute cluster takes approximately 5 minutes the first time around. Once a cluster with the given configuration is created, it does not need to be created again.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Read more about the default limits and how to request more quota [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas). ###Code cluster_name = "gensen-aml" try: compute_target = ComputeTarget(workspace=ws, name=cluster_name) print("Found existing compute target {}".format(cluster_name)) except ComputeTargetException: print("Creating a new compute target {}...".format(cluster_name)) compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_NC6", max_nodes=8 ) # create the cluster compute_target = ComputeTarget.create(ws, cluster_name, compute_config) compute_target.wait_for_completion(show_output=AZUREML_VERBOSE) if AZUREML_VERBOSE: print(compute_target.get_status().serialize()) ###Output Found existing compute target gensen-aml ###Markdown 2.2 Prepare the Training ScriptThe training process involves the following steps:1. Create or load the dataset vocabulary2. Train on the training dataset for each batch epoch (batch size = 48 updates)3. Evaluate on the validation dataset every 10 epochs4. Find the local minimum point on the validation loss5. Save the best model and stop the training processIn this section, we define the training script and move all necessary dependencies to `project_folder`, which will eventually be submitted to the remote compute target. A sketch of the loop described in these steps is shown below.
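The sketch is deliberately minimal and hypothetical: it is not the actual `gensen_train.py` implementation, and the function names and fake validation losses are placeholders for illustration only. ###Code
import random

random.seed(0)

# Hypothetical stand-ins for the real training and evaluation routines in gensen_train.py.
def train_one_epoch(epoch):
    pass  # one pass over the training minibatches would happen here

def validation_loss(epoch):
    # Fake validation loss that decreases for a while and then flattens out.
    return 5.0 - 0.02 * min(epoch, 60) + random.random() * 0.01

EVAL_EVERY = 10   # evaluate on the validation set every 10 epochs
PATIENCE = 3      # stop after 3 evaluations without a new best loss
best_val_loss = float("inf")
evals_without_improvement = 0

for epoch in range(1, 1001):
    train_one_epoch(epoch)
    if epoch % EVAL_EVERY == 0:
        val_loss = validation_loss(epoch)
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            evals_without_improvement = 0
            # The real script would save the best model checkpoint here.
        else:
            evals_without_improvement += 1
        if evals_without_improvement >= PATIENCE:
            print("Stopping at epoch {}, best validation loss {:.3f}".format(epoch, best_val_loss))
            break
###Output
_____no_output_____
###Markdown The real script also handles vocabulary creation, multi-task batching, and synchronization across nodes, which are omitted here for brevity.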
Note that the size of the folder cannot exceed 300 MB, so large dependencies such as pre-trained embeddings must be accessed from the datastore. ###Code project_folder = os.path.join(CACHE_DIR, "gensen") os.makedirs(project_folder, exist_ok=True) ###Output _____no_output_____ ###Markdown The script for distributed GenSen training is provided at `./gensen_train.py`.In this example, we use MLflow to log metrics. We also use the [AzureML-Mlflow](https://pypi.org/project/azureml-mlflow/) package to persist these metrics to the AzureML workspace. This is done with no change to the provided training script! Note that logging is done for the loss *per minibatch*. Copy the training script `gensen_train.py` and config file `gensen_config.json` into the project folder. ###Code utils_folder = os.path.join(project_folder, "utils_nlp") _ = shutil.copytree(UTIL_NLP_PATH, utils_folder) _ = shutil.copy(TRAIN_SCRIPT, os.path.join(utils_folder, "gensen")) _ = shutil.copy(CONFIG_PATH, os.path.join(utils_folder, "gensen")) ###Output _____no_output_____ ###Markdown 2.3 Define the Estimator and Experiment 2.3.1 Create a PyTorch EstimatorThe Azure ML SDK's PyTorch Estimator allows us to submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch).Note that `gensen_config.json` defines all the hyperparameters and paths used when training the GenSen model. The trained model will be saved in the `models` folder in Azure Blob Storage. **Remember to clean the `models` folder in order to save new models.** ###Code if MAX_EPOCH: script_params = { "--config": "utils_nlp/gensen/gensen_config.json", "--data_folder": ws.get_default_datastore().as_mount(), "--max_epoch": MAX_EPOCH, } else: script_params = { "--config": "utils_nlp/gensen/gensen_config.json", "--data_folder": ws.get_default_datastore().as_mount(), } estimator = PyTorch( source_directory=project_folder, script_params=script_params, compute_target=compute_target, entry_script=ENTRY_SCRIPT, node_count=2, process_count_per_node=1, distributed_training=MpiConfiguration(), use_gpu=True, framework_version="1.1", conda_packages=["scikit-learn=0.20.3", "h5py", "nltk"], pip_packages=["azureml-mlflow>=1.0.43.1", "numpy>=1.16.0"], ) ###Output _____no_output_____ ###Markdown This Estimator specifies that the training script will run on `2` nodes, with one worker per node. In order to execute a distributed run using GPU, we must set `use_gpu=True` and pass `distributed_training=MpiConfiguration()` to use MPI/Horovod. PyTorch, Horovod, and other necessary dependencies are installed automatically. If the training script makes use of packages that are not already defined in `.azureml/conda_dependencies.yml`, we must explicitly tell the estimator to install them via the constructor's `pip_packages` or `conda_packages` parameters.Note that if the estimator is being created for the first time, this step will take longer to run because the conda dependencies found under `.azureml/conda_dependencies.yml` must be installed from scratch. After the first run, it will use the existing conda environment and run the code directly. The training time will be around **2 hours** if you use the default value `max_epoch=None`, which means that training stops once a local minimum of the validation loss has been found.
Alternatively, you can specify the number of epochs for training.**Requirements:**- python=3.6.2- numpy=1.15.1- numpy-base=1.15.1- pip=10.0.1- python=3.6.6- python-dateutil=2.7.3- scikit-learn=0.20.3- azureml-defaults- h5py- nltk 2.3.2 Create the ExperimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in the AzureML workspace for this tutorial. ###Code experiment_name = EXPERIMENT_NAME experiment = Experiment(ws, name=experiment_name) ###Output _____no_output_____ ###Markdown 2.4 Submit the Training Job to the Compute TargetWe can run the experiment by simply submitting the Estimator object to the compute target. Note that this call is asynchronous. ###Code run = experiment.submit(estimator) if AZUREML_VERBOSE: print(run) ###Output _____no_output_____ ###Markdown 2.4.1 Monitor the RunWe can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. The widget automatically plots and visualizes the loss metric that we logged to the AzureML workspace. ###Code RunDetails(run).show() _ = run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until the script has completed training. ###Output _____no_output_____ ###Markdown 2.4.2 Interpret the Training ResultsThe following chart shows the model validation loss with different node configurations on AmlCompute. We find that the minimum validation loss decreases as the number of nodes increases; that is, the performance scales with the number of nodes in the cluster.| Standard_NC6 | AML_1node | AML_2nodes | AML_4nodes | AML_8nodes || --- | --- | --- | --- | --- || min_val_loss | 4.81 | 4.78 | 4.77 | 4.58 |We also observe common tradeoffs associated with distributed training. We make use of [Horovod](https://github.com/horovod/horovod), a distributed training tool for many popular deep learning frameworks that enables parallelization of work across the nodes in the cluster. In theory, distributed training decreases the time it takes for the model to converge, but additional time may also be spent on communication between nodes. Note that the communication time will eventually become negligible when training on larger and larger datasets, but being aware of this tradeoff is helpful for choosing the node configuration when training on smaller datasets. 3 Tune Model HyperparametersNow that we've seen how to do a simple PyTorch training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities. 3.1 Start a Hyperparameter SweepFirst, we define the hyperparameter space to sweep over. Since the training script uses a learning rate schedule to decay the learning rate every several epochs, we can tune the initial learning rate parameter. In this example we will use random sampling to try different configuration sets of hyperparameters to minimize our primary metric, the best validation loss.Then, we specify the early termination policy used to terminate poorly performing runs early. Here we use the `BanditPolicy`, which terminates any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we will apply this policy every epoch (since we report the validation loss metric every epoch and `evaluation_interval=1`).
Note that we explicitly define `delay_evaluation` such that the first policy evaluation does not occur until after the 10th epoch.Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparametersspecify-an-early-termination-policy) for more information on the BanditPolicy and other policies available. ###Code param_sampling = RandomParameterSampling({"learning_rate": uniform(0.0001, 0.001)}) early_termination_policy = BanditPolicy( slack_factor=0.15, evaluation_interval=1, delay_evaluation=10 ) hyperdrive_config = HyperDriveConfig( estimator=estimator, hyperparameter_sampling=param_sampling, policy=early_termination_policy, primary_metric_name="min_val_loss", primary_metric_goal=PrimaryMetricGoal.MINIMIZE, max_total_runs=MAX_TOTAL_RUNS, max_concurrent_runs=MAX_CONCURRENT_RUNS, ) ###Output _____no_output_____ ###Markdown Finally, lauch the hyperparameter tuning job. ###Code hyperdrive_run = experiment.submit(hyperdrive_config) # Start the HyperDrive run ###Output _____no_output_____ ###Markdown 3.2 Monitor HyperDrive RunsWe can monitor the progress of the runs with a Jupyter widget, or again block until the run has completed. ###Code RunDetails(hyperdrive_run).show() _ = hyperdrive_run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until complete ###Output _____no_output_____ ###Markdown 3.2.1 Interpret the Tuning ResultsThe chart below shows 4 different threads running in parallel with different learning rates. The number of total runs is 8. We pick the best learning rate by minimizing the validation loss. The HyperDrive run automatically shows the tracking charts (example in the following) to facilitate visualization of the tuning process.![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune1.PNG)![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune2.PNG)**From the results in section [2.3.5 Monitor your run](2.4.1-Monitor-your-run), the best validation loss for 1 node is 4.81, but with tuning we can easily achieve better performance around 4.65.** 3.3 Find the Best Model Once all the runs complete, we can find the run that produced the model with the lowest loss. ###Code best_run = hyperdrive_run.get_best_run_by_primary_metric() best_run_metrics = best_run.get_metrics() print( "Best Run:\n Validation loss: {0:.5f} \n Learning rate: {1:.5f} \n".format( best_run_metrics["min_val_loss"], best_run_metrics["learning_rate"] ) ) # Persist properties of the run so we can access the logged metrics later sb.glue("min_val_loss", best_run_metrics['min_val_loss']) sb.glue("learning_rate", best_run_metrics['learning_rate']) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Training GenSen on AzureML with SNLI Dataset**GenSen: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning** [\[1\]](References) IntroductionGenSen is a technique to learn general purpose, fixed-length representations of sentences via multi-task training. The model combines the benefits of diverse sentence representation learning objectives into a single multi-task framework. As described in the paper **Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning**, it is "the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. 
multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors" [\[1\]](References). These representations are useful for transfer and low-resource learning. GenSen is trained on several data sources with multiple training objectives on over 100 milion sentences.GenSen yields the state-of-the-art results on multiple datasets, such as MRPC, SICK-R, SICK-E and STS, for sentence similarity. The reported results are as follows compared with other models [\[3\]](References):| Model | MRPC | SICK-R | SICK-E | STS || --- | --- | --- | --- | --- || GenSen (Subramanian et al., 2018) | 78.6/84.4 | 0.888 | 87.8 | 78.9/78.6 || [InferSent](https://arxiv.org/abs/1705.02364) (Conneau et al., 2017) | 76.2/83.1 | 0.884 | 86.3 | 75.8/75.5 || [TF-KLD](https://www.aclweb.org/anthology/D13-1090) (Ji and Eisenstein, 2013) | 80.4/85.9 | - | - | - |This notebook serves as an introduction to an end-to-end NLP solution for sentence similarity by demonstrating how to train and tune GenSen on the AzureML platform. We show the advantages of AzureML when training large NLP models with GPU.For more information on **AzureML**, see these resources:* [Quickstart notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)* [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters) Background: Sequence-to-Sequence Learning![Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)](https://nlpbp.blob.core.windows.net/images/seq2seq.png)**Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)**The GenSen model is known to be most similar to that of Luong et al. (2015) [\[4\]](References), who train a many-to-many **sequence-to-sequence** model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. Sequence-to-sequence learning, or seq2seq, aims to directly model the conditional probability $p(x|y)$ of mapping an input sequence, $x_1,...,x_n$, into an output sequence, $y_1,...,y_m$. This is done using an encoder-decoder framework. As illustrated in the above figure, the encoder computes a representation $s$ for each input sequence, which the *decoder* uses to generate the ouput sequence. This decomposes the conditional probability as" [\[4\]](References):$$\log p(y|x)=\sum_{j=1}^{m} \log p(y_i|y_{<j}, x, s)$$It is worth noting that the GenSen model deviates from Luong's seq2seq method in two key ways. First, GenSen uses an attention mechanism, meaning that the learned vector representations are not of fixed length. Second, GenSen optimizes for improvements on the same tasks on which the model is trained, rather than optimizing for transferability to different tasks or domains. [\[1\]](References) Azure ML Compute vs. LocalWe did a comparative study to make it easier for you to choose between a GPU enabled Azure VM and Azure ML compute. The table below provides the cost vs performance trade-off for each of the choices. 
We can tell from the table below that with distributed training on AzureML, it will make the model converge faster and get better training loss with similar training time.* The total time in the table stands for the training time + setup time.* Cost is the estimated cost of running the Azure ML Compute Job or the VM up-time.**Please note:** These were the estimated cost for running these notebooks as of July 1. Please look at the [Azure Pricing Calculator](https://azure.microsoft.com/en-us/pricing/calculator/) to see the most up to date pricing information. |---|Azure VM| AML 1 Node| AML 2 Nodes | AML 4 Nodes | AML 8 Nodes||---|---|---|---|---|---||Training Loss​|4.91​|4.81​|4.78​|4.77​|4.58​||Total Time​|1h 05m|1h 54m|1h 44m​|1h 26m​|1h 07m​||Cost|\$1.12​|\$2.71​|\$4.68​|\$7.9​|\$12.1​| Table of Contents0. [Global Settings](0-Global-Settings)1. [Data Loading and Preprocessing](1-Data-Loading-and-Preprocessing) * 1.1. [Load SNLI](1.1-Load-SNLI) * 1.2. [Tokenize](1.2-Tokenize) * 1.3. [Preprocess](1.3-Preprocess) * 1.4. [Upload to Azure Blob Storage](1.4-Upload-to-Azure-Blob-Storage) 2. [Train GenSen with Distributed Pytorch and Horovod on AzureML](2-Train-GenSen-with-Distributed-Pytorch-and-Horovod-on-AzureML) * 2.1 [Create or Attach a Remote Compute Target](2.1-Create-or-Attach-a-Remote-Compute-Target) * 2.2. [Prepare the Training Script](2.2-Prepare-the-Training-Script) * 2.3. [Define the Estimator and Experiment](2.3-Define-the-Estimator-and-Experiment) * 2.3.1 [Create a PyTorch Estimator](2.3.1-Create-a-PyTorch-Estimator) * 2.3.2 [Create the Experiment](2.3.2-Create-the-Experiment) * 2.4. [Submit the Training Job to the Compute Target](2.4-Submit-the-Training-Job-to-the-Compute-Target) * 2.4.1 [Monitor the Run](2.4.1-Monitor-the-Run) * 2.4.2 [Interpret the Training Results](2.4.2-Interpret-the-Training-Results)3. 
[Tune Model Hyperparameters](3-Tune-Model-Hyperparameters) * 3.1 [Start a Hyperparameter Sweep](3.1-Start-a-Hyperparameter-Sweep) * 3.2 [Monitor HyperDrive Runs](3.2-Monitor-HyperDrive-Runs) * 3.3 [Find the Best Model](3.3-Find-the-Best-Model)- [References](References) 0 Global Settings ###Code import sys import time import os import pandas as pd import shutil import papermill as pm import scrapbook as sb sys.path.append("../../") from utils_nlp.dataset import snli, preprocess, Split from utils_nlp.azureml import azureml_utils from utils_nlp.models.gensen.preprocess_utils import gensen_preprocess import azureml as aml import azureml.train.hyperdrive as hd from azureml.telemetry import set_diagnostics_collection import azureml.data from azureml.data.azure_storage_datastore import AzureFileDatastore from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException from azureml.core import Experiment, get_run from azureml.core.runconfig import MpiConfiguration from azureml.train.dnn import PyTorch from azureml.train.estimator import Estimator from azureml.train.hyperdrive import ( RandomParameterSampling, BanditPolicy, HyperDriveConfig, uniform, PrimaryMetricGoal, ) from azureml.widgets import RunDetails print("System version: {}".format(sys.version)) print("Azure ML SDK Version:", aml.core.VERSION) print("Pandas version: {}".format(pd.__version__)) # Model configuration NROWS = None CACHE_DIR = "./temp" MAX_EPOCH = 2 # by default is None ENTRY_SCRIPT = "utils_nlp/gensen/gensen_train.py" TRAIN_SCRIPT = "gensen_train.py" CONFIG_PATH = "gensen_config.json" EXPERIMENT_NAME = "NLP-SS-GenSen-deepdive" UTIL_NLP_PATH = "../../utils_nlp" MAX_TOTAL_RUNS = 8 MAX_CONCURRENT_RUNS = 4 # Azure resources subscription_id = "YOUR_SUBSCRIPTION_ID" resource_group = "YOUR_RESOURCE_GROUP_NAME" workspace_name = "YOUR_WORKSPACE_NAME" workspace_region = "YOUR_WORKSPACE_REGION" #Possible values eastus, eastus2 and so on. AZUREML_CONFIG_PATH = "./.azureml" AZUREML_VERBOSE = False ###Output _____no_output_____ ###Markdown In this notebook we use the Azure Machine Learning Python SDK to facilitate remote training and computation. To get started, we must first initialize an AzureML [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace), a centralized resource for managing experiment runs, compute resources, datastores, and other machine learning artifacts on the cloud. Refer to the official [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook for more information about setting up the workspace. ###Code if os.path.exists(AZUREML_CONFIG_PATH): ws = azureml_utils.get_or_create_workspace(config_path=AZUREML_CONFIG_PATH) else: ws = azureml_utils.get_or_create_workspace( subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name, workspace_region=workspace_region, ) if AZUREML_VERBOSE: print("Workspace name: {}".format(ws.name)) print("Azure region: {}".format(ws.location)) print("Subscription id: {}".format(ws.subscription_id)) print("Resource group: {}".format(ws.resource_group)) ###Output _____no_output_____ ###Markdown Opt-in diagnostics for better experience, quality, and security of future releases. ###Code set_diagnostics_collection(send_diagnostics=True) ###Output Turning diagnostics collection on. 
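###Markdown The helper above takes care of looking up or creating the workspace. For reference, the plain SDK calls for persisting and later reloading a workspace configuration look roughly like the optional sketch below; `write_config` and `from_config` are standard `azureml.core.Workspace` methods, and writing to a `.azureml` folder in the working directory is their usual default behavior. ###Code
from azureml.core import Workspace

# Optional: persist the workspace details so later sessions can reconnect without
# re-entering the subscription id, resource group, and workspace name.
ws.write_config()  # typically writes .azureml/config.json in the current directory

# A later session (or another notebook) can then reconnect with:
ws_reloaded = Workspace.from_config()
if AZUREML_VERBOSE:
    print("Reloaded workspace: {}".format(ws_reloaded.name))
###Output _____no_output_____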
###Markdown 1 Data Loading and Preprocessing We use the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset in this example.Note: The dataset used in the original paper can be downloaded by running the bashfile [here](https://github.com/Maluuba/gensen/blob/master/get_data.sh). Training on the original datasets will reproduce the results in the [paper](https://arxiv.org/abs/1804.00079), but **will take about 20 hours of training time**. For the purposes of this example we use SNLI, a subset of the original dataset, as the only training dataset. 1.1 Load SNLI ###Code data_dir = os.path.join(CACHE_DIR, "data") train = snli.load_pandas_df(data_dir, file_split=Split.TRAIN, nrows=NROWS) dev = snli.load_pandas_df(data_dir, file_split=Split.DEV, nrows=NROWS) test = snli.load_pandas_df(data_dir, file_split=Split.TEST, nrows=NROWS) train.head() ###Output _____no_output_____ ###Markdown 1.2 Tokenize Here we clean the dataframes, do lowercase standardization, and tokenize the text using the [NLTK](https://www.nltk.org/) library. ###Code def clean_and_tokenize(df): df = snli.clean_cols(df) df = snli.clean_rows(df) df = preprocess.to_lowercase(df) df = preprocess.to_nltk_tokens(df) return df ###Output _____no_output_____ ###Markdown For `clean_and_tokenize` function, it may take a little bit longer. To run the following cell, it takes around 5 to 10 mins. ###Code train = clean_and_tokenize(train) dev = clean_and_tokenize(dev) test = clean_and_tokenize(test) train.head() ###Output _____no_output_____ ###Markdown 1.3 PreprocessWe format our data in a specific way in order for the Gensen model to be able to ingest it. We do this by* Saving the tokens for each split in a `snli_1.0_{split}.txt.clean` file, with the sentence pairs and scores tab-separated and the tokens separated by a single space. Since some of the samples have invalid scores ("-"), we filter those out and save them separately in a `snli_1.0_{split}.txt.clean.noblank` file.* Saving the tokenized sentence and labels separately, in the form `snli_1.0_{split}.txt.s1.tok` or `snli_1.0_{split}.txt.s2.tok` or `snli_1.0_{split}.txt.lab`. ###Code preprocessed_data_dir = gensen_preprocess(train, dev, test, data_dir) print("Writing input data to {}".format(preprocessed_data_dir)) ###Output Writing input data to ./temp\data\clean/snli_1.0 ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Training GenSen on AzureML with SNLI Dataset**GenSen: Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning** [\[1\]](References) IntroductionGenSen is a technique to learn general purpose, fixed-length representations of sentences via multi-task training. The model combines the benefits of diverse sentence representation learning objectives into a single multi-task framework. As described in the paper **Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning**, it is "the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors" [\[1\]](References). These representations are useful for transfer and low-resource learning. 
GenSen is trained on several data sources with multiple training objectives on over 100 milion sentences.GenSen yields the state-of-the-art results on multiple datasets, such as MRPC, SICK-R, SICK-E and STS, for sentence similarity. The reported results are as follows compared with other models [\[3\]](References):| Model | MRPC | SICK-R | SICK-E | STS || --- | --- | --- | --- | --- || GenSen (Subramanian et al., 2018) | 78.6/84.4 | 0.888 | 87.8 | 78.9/78.6 || [InferSent](https://arxiv.org/abs/1705.02364) (Conneau et al., 2017) | 76.2/83.1 | 0.884 | 86.3 | 75.8/75.5 || [TF-KLD](https://www.aclweb.org/anthology/D13-1090) (Ji and Eisenstein, 2013) | 80.4/85.9 | - | - | - |This notebook serves as an introduction to an end-to-end NLP solution for sentence similarity by demonstrating how to train and tune GenSen on the AzureML platform. We show the advantages of AzureML when training large NLP models with GPU.For more information on **AzureML**, see these resources:* [Quickstart notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)* [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters) Background: Sequence-to-Sequence Learning![Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)](https://nlpbp.blob.core.windows.net/images/seq2seq.png)**Sequence to sequence learning examples: machine translation (left) and constituent parsing (right)**The GenSen model is known to be most similar to that of Luong et al. (2015) [\[4\]](References), who train a many-to-many **sequence-to-sequence** model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. Sequence-to-sequence learning, or seq2seq, aims to directly model the conditional probability $p(x|y)$ of mapping an input sequence, $x_1,...,x_n$, into an output sequence, $y_1,...,y_m$. This is done using an encoder-decoder framework. As illustrated in the above figure, the encoder computes a representation $s$ for each input sequence, which the *decoder* uses to generate the ouput sequence. This decomposes the conditional probability as" [\[4\]](References):$$\log p(y|x)=\sum_{j=1}^{m} \log p(y_i|y_{<j}, x, s)$$It is worth noting that the GenSen model deviates from Luong's seq2seq method in two key ways. First, GenSen uses an attention mechanism, meaning that the learned vector representations are not of fixed length. Second, GenSen optimizes for improvements on the same tasks on which the model is trained, rather than optimizing for transferability to different tasks or domains. [\[1\]](References) Azure ML Compute vs. LocalWe did a comparative study to make it easier for you to choose between a GPU enabled Azure VM and Azure ML compute. The table below provides the cost vs performance trade-off for each of the choices. We can tell from the table below that with distributed training on AzureML, it will make the model converge faster and get better training loss with similar training time.* The "Azure VM" column refers to the running time of the [gensen local](gensen_local.ipynb) notebook. All the other columns refer to the current notebook.* Both the Azure VM and each Azure ML Compute node are Standard_NC6 with 1 NVIDIA Tesla K80 GPU with 12 GB GPU memory. 
* The total time in the table stands for the training time + setup time.* Cost is the estimated cost of running the Azure ML Compute Job or the VM up-time.**Please note:** These were the estimated cost for running these notebooks as of July 1st, 2019. Please look at the [Azure Pricing Calculator](https://azure.microsoft.com/en-us/pricing/calculator/) to see the most up to date pricing information. |---|Azure VM| AML 1 Node| AML 2 Nodes | AML 4 Nodes | AML 8 Nodes||---|---|---|---|---|---||Training Loss​|4.91​|4.81​|4.78​|4.77​|4.58​||Total Time​|1h 05m|1h 54m|1h 44m​|1h 26m​|1h 07m​||Cost|\$1.12​|\$2.71​|\$4.68​|\$7.9​|\$12.1​| Table of Contents0. [Global Settings](0-Global-Settings)1. [Data Loading and Preprocessing](1-Data-Loading-and-Preprocessing) * 1.1. [Load SNLI](1.1-Load-SNLI) * 1.2. [Tokenize](1.2-Tokenize) * 1.3. [Preprocess](1.3-Preprocess) * 1.4. [Upload to Azure Blob Storage](1.4-Upload-to-Azure-Blob-Storage) 2. [Train GenSen with Distributed Pytorch and Horovod on AzureML](2-Train-GenSen-with-Distributed-Pytorch-and-Horovod-on-AzureML) * 2.1 [Create or Attach a Remote Compute Target](2.1-Create-or-Attach-a-Remote-Compute-Target) * 2.2. [Prepare the Training Script](2.2-Prepare-the-Training-Script) * 2.3. [Define the Estimator and Experiment](2.3-Define-the-Estimator-and-Experiment) * 2.3.1 [Create a PyTorch Estimator](2.3.1-Create-a-PyTorch-Estimator) * 2.3.2 [Create the Experiment](2.3.2-Create-the-Experiment) * 2.4. [Submit the Training Job to the Compute Target](2.4-Submit-the-Training-Job-to-the-Compute-Target) * 2.4.1 [Monitor the Run](2.4.1-Monitor-the-Run) * 2.4.2 [Interpret the Training Results](2.4.2-Interpret-the-Training-Results)3. [Tune Model Hyperparameters](3-Tune-Model-Hyperparameters) * 3.1 [Start a Hyperparameter Sweep](3.1-Start-a-Hyperparameter-Sweep) * 3.2 [Monitor HyperDrive Runs](3.2-Monitor-HyperDrive-Runs) * 3.3 [Find the Best Model](3.3-Find-the-Best-Model)- [References](References) 0 Global Settings ###Code import sys import time import os import pandas as pd import shutil import papermill as pm import scrapbook as sb sys.path.append("../../") from utils_nlp.dataset import snli, preprocess, Split from utils_nlp.azureml import azureml_utils from utils_nlp.models.gensen.preprocess_utils import gensen_preprocess import azureml as aml import azureml.train.hyperdrive as hd from azureml.telemetry import set_diagnostics_collection import azureml.data from azureml.data.azure_storage_datastore import AzureFileDatastore from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException from azureml.core import Experiment, get_run from azureml.core.runconfig import MpiConfiguration from azureml.train.dnn import PyTorch from azureml.train.estimator import Estimator from azureml.train.hyperdrive import ( RandomParameterSampling, BanditPolicy, HyperDriveConfig, uniform, PrimaryMetricGoal, ) from azureml.widgets import RunDetails print("System version: {}".format(sys.version)) print("Azure ML SDK Version:", aml.core.VERSION) print("Pandas version: {}".format(pd.__version__)) # Model configuration NROWS = None CACHE_DIR = "./temp" AZUREML_CONFIG_PATH = "./.azureml" AZUREML_VERBOSE = False # Prints verbose azureml logs when True MAX_EPOCH = 2 # by default is None TRAIN_SCRIPT = "gensen_train.py" CONFIG_PATH = "gensen_config.json" EXPERIMENT_NAME = "NLP-SS-GenSen-deepdive" UTIL_NLP_PATH = "../../utils_nlp" MAX_TOTAL_RUNS = 8 MAX_CONCURRENT_RUNS = 4 ###Output _____no_output_____ ###Markdown In this notebook we use 
the Azure Machine Learning Python SDK to facilitate remote training and computation. To get started, we must first initialize an AzureML [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace), a centralized resource for managing experiment runs, compute resources, datastores, and other machine learning artifacts on the cloud. The following cell looks to set up the connection to your [Azure Machine Learning service Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureworkspace). You can choose to connect to an existing workspace or create a new one. **To access an existing workspace:**1. If you have a `config.json` file, you do not need to provide the workspace information; you will only need to update the `config_path` variable that is defined above which contains the file.2. Otherwise, you will need to supply the following: * The name of your workspace * Your subscription id * The resource group name**To create a new workspace:**Set the following information:* A name for your workspace* Your subscription id* The resource group name* [Azure region](https://azure.microsoft.com/en-us/global-infrastructure/regions/) to create the workspace in, such as `eastus2`. ###Code # Azure resources subscription_id = "YOUR_SUBSCRIPTION_ID" resource_group = "YOUR_RESOURCE_GROUP_NAME" workspace_name = "YOUR_WORKSPACE_NAME" workspace_region = "YOUR_WORKSPACE_REGION" #Possible values eastus, eastus2 and so on. ws = azureml_utils.get_or_create_workspace( config_path=AZUREML_CONFIG_PATH, subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name, workspace_region=workspace_region, ) print( "Workspace name: " + ws.name, "Azure region: " + ws.location, "Subscription id: " + ws.subscription_id, "Resource group: " + ws.resource_group, sep="\n", ) ###Output _____no_output_____ ###Markdown Opt-in diagnostics for better experience, quality, and security of future releases. ###Code set_diagnostics_collection(send_diagnostics=True) ###Output Turning diagnostics collection on. ###Markdown 1 Data Loading and Preprocessing We use the [SNLI](https://nlp.stanford.edu/projects/snli/) dataset in this example.Note: The dataset used in the original paper can be downloaded by running the bashfile [here](https://github.com/Maluuba/gensen/blob/master/get_data.sh). Training on the original datasets will reproduce the results in the [paper](https://arxiv.org/abs/1804.00079), but **will take about 20 hours of training time**. For the purposes of this example we use SNLI, a subset of the original dataset, as the only training dataset. 1.1 Load SNLI ###Code data_dir = os.path.join(CACHE_DIR, "data") train = snli.load_pandas_df(data_dir, file_split=Split.TRAIN, nrows=NROWS) dev = snli.load_pandas_df(data_dir, file_split=Split.DEV, nrows=NROWS) test = snli.load_pandas_df(data_dir, file_split=Split.TEST, nrows=NROWS) train.head() ###Output _____no_output_____ ###Markdown 1.2 Tokenize Here we clean the dataframes, do lowercase standardization, and tokenize the text using the [NLTK](https://www.nltk.org/) library. ###Code def clean_and_tokenize(df): df = snli.clean_cols(df) df = snli.clean_rows(df) df = preprocess.to_lowercase(df) df = preprocess.to_nltk_tokens(df) return df ###Output _____no_output_____ ###Markdown For `clean_and_tokenize` function, it may take a little bit longer. To run the following cell, it takes around 5 to 10 mins. 
###Code train = clean_and_tokenize(train) dev = clean_and_tokenize(dev) test = clean_and_tokenize(test) train.head() ###Output _____no_output_____ ###Markdown 1.3 PreprocessWe format our data in a specific way in order for the Gensen model to be able to ingest it. We do this by* Saving the tokens for each split in a `snli_1.0_{split}.txt.clean` file, with the sentence pairs and scores tab-separated and the tokens separated by a single space. Since some of the samples have invalid scores ("-"), we filter those out and save them separately in a `snli_1.0_{split}.txt.clean.noblank` file.* Saving the tokenized sentence and labels separately, in the form `snli_1.0_{split}.txt.s1.tok` or `snli_1.0_{split}.txt.s2.tok` or `snli_1.0_{split}.txt.lab`. ###Code preprocessed_data_dir = gensen_preprocess(train, dev, test, data_dir) print("Writing input data to {}".format(preprocessed_data_dir)) ###Output Writing input data to ./temp/data/clean/snli_1.0 ###Markdown 1.4 Upload to Azure Blob StorageWe upload the data from the local machine into the datastore so that it can be accessed for remote training. The datastore is a reference that points to a storage account, e.g. the Azure Blob Storage service. It can be attached to an AzureML workspace to facilitate data management operations such as uploading/downloading data or interacting with data from remote compute targets.**Note: If you already have the preprocessed files under `clean/snli_1.0/` in your default datastore, you DO NOT need to redo this section.** ###Code ds = ws.get_default_datastore() if AZUREML_VERBOSE: print("Datastore type: {}".format(ds.datastore_type)) print("Datastore account: {}".format(ds.account_name)) print("Datastore container: {}".format(ds.container_name)) print("Data reference: {}".format(ds.as_mount())) _ = ds.upload( src_dir=os.path.join(data_dir, "clean/snli_1.0"), overwrite=False, show_progress=AZUREML_VERBOSE, ) ###Output _____no_output_____ ###Markdown 2 Train GenSen with Distributed Pytorch and Horovod on AzureMLIn this tutorial, we train a GenSen model with PyTorch on AML using distributed training across a GPU cluster.After creating the workspace and setting up the development environment, training a model in Azure Machine Learning involves the following steps:1. Creating a remote compute target2. Preparing the training data and uploading it to datastore (Note that this was done in Section 1.4)3. Preparing the training script4. Creating Estimator and Experiment objects5. Submitting the Estimator to an Experiment attached to the AzureML workspace 2.1 Create or Attach a Remote Compute TargetWe create and attach a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training the model. Here we use the AzureML-managed compute target ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute)) as our remote training compute resource. Our cluster autoscales from 0 to 2 `STANDARD_NC6` GPU nodes.Creating and configuring the AmlCompute cluster takes approximately 5 minutes the first time around. Once a cluster with the given configuration is created, it does not need to be created again.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Read more about the default limits and how to request more quota [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas). 
###Code cluster_name = "gensen-aml" try: compute_target = ComputeTarget(workspace=ws, name=cluster_name) print("Found existing compute target {}".format(cluster_name)) except ComputeTargetException: print("Creating a new compute target {}...".format(cluster_name)) compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_NC6", max_nodes=8 ) # create the cluster compute_target = ComputeTarget.create(ws, cluster_name, compute_config) compute_target.wait_for_completion(show_output=AZUREML_VERBOSE) if AZUREML_VERBOSE: print(compute_target.get_status().serialize()) ###Output Found existing compute target gensen-aml ###Markdown 2.2 Prepare the Training ScriptThe training process involves the following steps:1. Create or load the dataset vocabulary2. Train on the training dataset for each batch epoch (batch size = 48 updates)3. Evaluate on the validation dataset for every 10 epochs4. Find the local minimum point on validation loss5. Save the best model and stop the training processIn this section, we define the training script and move all necessary dependencies to `project_folder`, which will eventually be submitted to the remote compute target. Note that the size of the folder can not exceed 300Mb, so large dependencies such as pre-trained embeddings must be accessed from the datastore. ###Code project_folder = os.path.join(CACHE_DIR, "gensen") os.makedirs(project_folder, exist_ok=True) ###Output _____no_output_____ ###Markdown The script for distributed GenSen training is provided at `./gensen_train.py`.In this example, we use MLflow to log metrics. We also use the [AzureML-Mlflow](https://pypi.org/project/azureml-mlflow/) package to persist these metrics to the AzureML workspace. This is done with no change to the provided training script! Note that logging is done for loss *per minibatch*. Copy the training script `gensen_train.py` and config file `gensen_config.json` into the project folder. ###Code utils_folder = os.path.join(project_folder, "utils_nlp") _ = shutil.copytree(UTIL_NLP_PATH, utils_folder) _ = shutil.copy(TRAIN_SCRIPT, project_folder) _ =shutil.copy(CONFIG_PATH, project_folder) ###Output _____no_output_____ ###Markdown 2.3 Define the Estimator and Experiment 2.3.1 Create a PyTorch EstimatorThe Azure ML SDK's PyTorch Estimator allows us to submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch).Note that `gensen_config.json` defines all the hyperparameters and paths when training GenSen model. The trained model will be saved in `models` to Azure Blob Storage. **Remember to clean the `models` folder in order to save new models.** ###Code if MAX_EPOCH: script_params = { "--config": CONFIG_PATH, "--data_folder": ws.get_default_datastore().as_mount(), "--max_epoch": MAX_EPOCH, } else: script_params = { "--config": CONFIG_PATH, "--data_folder": ws.get_default_datastore().as_mount(), } estimator = PyTorch( source_directory=project_folder, script_params=script_params, compute_target=compute_target, entry_script= TRAIN_SCRIPT, node_count=2, process_count_per_node=1, distributed_training=MpiConfiguration(), use_gpu=True, framework_version="1.1", conda_packages=["scikit-learn=0.20.3", "h5py", "nltk"], pip_packages=["azureml-mlflow>=1.0.43.1", "numpy>=1.16.0"], ) ###Output _____no_output_____ ###Markdown This Estimator specifies that the training script will run on `2` nodes, with one worker per node. 
In order to execute a distributed run using GPU, we must define `use_gpu` and `distributed_backend` to use MPI/Horovod. PyTorch, Horovod, and other necessary dependencies are installed automatically. If the training script makes use of packages that are not already defined in `.azureml/conda_dependencies.yml`, we must explicitly tell the estimator to install them via the constructor's `pip_packages` or `conda_packages` parameters.Note that if the estimator is being created for the first time, this step will take longer to run because the conda dependencies found under `.azureml/conda_dependencies.yml` must be installed from scratch. After the first run, it will use the existing conda environment and run the code directly. The training time will take around **2 hours** if you use the default value `max_epoch=None`, which means the training will stop if the local minimum loss has been found. User can specify the number of epochs for training.**Requirements:**- python=3.6.2- numpy=1.15.1- numpy-base=1.15.1- pip=10.0.1- python=3.6.6- python-dateutil=2.7.3- scikit-learn=0.20.3- azureml-defaults- h5py- nltk 2.3.2 Create the ExperimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in the AzureML workspace for this tutorial. ###Code experiment_name = EXPERIMENT_NAME experiment = Experiment(ws, name=experiment_name) ###Output _____no_output_____ ###Markdown 2.4 Submit the Training Job to the Compute TargetWe can run the experiment by simply submitting the Estimator object to the compute target. Note that this call is asynchronous. ###Code run = experiment.submit(estimator) if AZUREML_VERBOSE: print(run) ###Output _____no_output_____ ###Markdown 2.4.1 Monitor the RunWe can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. The widget automatically plots and visualizes the loss metric that we logged to the AzureML workspace. ###Code RunDetails(run).show() _ = run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until the script has completed training. ###Output _____no_output_____ ###Markdown 2.4.2 Interpret the Training ResultsThe following chart shows the model validation loss with different node configurations on AmlCompute. We find that the minimum validation loss decreases as the number of nodes increases; that is, the performance scales with the number of nodes in the cluster.| Standard_NC6 | AML_1node | AML_2nodes | AML_4nodes | AML_8nodes || --- | --- | --- | --- | --- || min_val_loss | 4.81 | 4.78 | 4.77 | 4.58 |We also observe common tradeoffs associated with distributed training. We make use of [Horovod](https://github.com/horovod/horovod), a distributed training tool for many popular deep learning frameworks that enables parallelization of work across the nodes in the cluster. Distributed training decreases the time it takes for the model to converge in theory, but the model may also take more time in communicating with each node. Note that the communication time will eventually become negligible when training on larger and larger datasets, but being aware of this tradeoff is helpful for choosing the node configuration when training on smaller datasets. 3 Tune Model HyperparametersNow that we've seen how to do a simple PyTorch training run using the SDK, let's see if we can further improve the accuracy of our model. 
We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities. 3.1 Start a Hyperparameter SweepFirst, we define the hyperparameter space to sweep over. Since the training script uses a learning rate schedule to decay the learning rate every several epochs, we can tune the initial learning rate parameter. In this example we will use random sampling to try different configuration sets of hyperparameters to minimize our primary metric, the best validation loss.Then, we specify the early termination policy to use to early terminate poorly performing runs. Here we use the `BanditPolicy`, which terminates any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we will apply this policy every epoch (since we report our the validation loss metric every epoch and `evaluation_interval=1`). Note that we explicitly define `delay_evaluation` such that the first policy evaluation does not occur until after the 10th epoch.Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparametersspecify-an-early-termination-policy) for more information on the BanditPolicy and other policies available. ###Code param_sampling = RandomParameterSampling({"learning_rate": uniform(0.0001, 0.001)}) early_termination_policy = BanditPolicy( slack_factor=0.15, evaluation_interval=1, delay_evaluation=10 ) hyperdrive_config = HyperDriveConfig( estimator=estimator, hyperparameter_sampling=param_sampling, policy=early_termination_policy, primary_metric_name="min_val_loss", primary_metric_goal=PrimaryMetricGoal.MINIMIZE, max_total_runs=MAX_TOTAL_RUNS, max_concurrent_runs=MAX_CONCURRENT_RUNS, ) ###Output _____no_output_____ ###Markdown Finally, lauch the hyperparameter tuning job. ###Code hyperdrive_run = experiment.submit(hyperdrive_config) # Start the HyperDrive run ###Output _____no_output_____ ###Markdown 3.2 Monitor HyperDrive RunsWe can monitor the progress of the runs with a Jupyter widget, or again block until the run has completed. ###Code RunDetails(hyperdrive_run).show() _ = hyperdrive_run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until complete ###Output _____no_output_____ ###Markdown 3.2.1 Interpret the Tuning ResultsThe chart below shows 4 different threads running in parallel with different learning rates. The number of total runs is 8. We pick the best learning rate by minimizing the validation loss. The HyperDrive run automatically shows the tracking charts (example in the following) to facilitate visualization of the tuning process.![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune1.PNG)![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune2.PNG)**From the results in section [2.3.5 Monitor your run](2.4.1-Monitor-your-run), the best validation loss for 1 node is 4.81, but with tuning we can easily achieve better performance around 4.65.** 3.3 Find the Best Model Once all the runs complete, we can find the run that produced the model with the lowest loss. 
###Code best_run = hyperdrive_run.get_best_run_by_primary_metric() best_run_metrics = best_run.get_metrics() print( "Best Run:\n Validation loss: {0:.5f} \n Learning rate: {1:.5f} \n".format( best_run_metrics["min_val_loss"], best_run_metrics["learning_rate"] ) ) # Persist properties of the run so we can access the logged metrics later sb.glue("min_val_loss", best_run_metrics['min_val_loss']) sb.glue("learning_rate", best_run_metrics['learning_rate']) ###Output _____no_output_____ ###Markdown 1.4 Upload to Azure Blob StorageWe upload the data from the local machine into the datastore so that it can be accessed for remote training. The datastore is a reference that points to a storage account, e.g. the Azure Blob Storage service. It can be attached to an AzureML workspace to facilitate data management operations such as uploading/downloading data or interacting with data from remote compute targets.**Note: If you already have the preprocessed files under `clean/snli_1.0/` in your default datastore, you DO NOT need to redo this section.** ###Code ds = ws.get_default_datastore() if AZUREML_VERBOSE: print("Datastore type: {}".format(ds.datastore_type)) print("Datastore account: {}".format(ds.account_name)) print("Datastore container: {}".format(ds.container_name)) print("Data reference: {}".format(ds.as_mount())) _ = ds.upload( src_dir=os.path.join(data_dir, "clean/snli_1.0"), overwrite=False, show_progress=AZUREML_VERBOSE, ) ###Output _____no_output_____ ###Markdown 2 Train GenSen with Distributed Pytorch and Horovod on AzureMLIn this tutorial, we train a GenSen model with PyTorch on AML using distributed training across a GPU cluster.After creating the workspace and setting up the development environment, training a model in Azure Machine Learning involves the following steps:1. Creating a remote compute target2. Preparing the training data and uploading it to datastore (Note that this was done in Section 1.4)3. Preparing the training script4. Creating Estimator and Experiment objects5. Submitting the Estimator to an Experiment attached to the AzureML workspace 2.1 Create or Attach a Remote Compute TargetWe create and attach a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecturecompute-target) for training the model. Here we use the AzureML-managed compute target ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute)) as our remote training compute resource. Our cluster autoscales from 0 to 2 `STANDARD_NC6` GPU nodes.Creating and configuring the AmlCompute cluster takes approximately 5 minutes the first time around. Once a cluster with the given configuration is created, it does not need to be created again.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Read more about the default limits and how to request more quota [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas). 
###Code cluster_name = "gensen-aml" try: compute_target = ComputeTarget(workspace=ws, name=cluster_name) print("Found existing compute target {}".format(cluster_name)) except ComputeTargetException: print("Creating a new compute target {}...".format(cluster_name)) compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_NC6", max_nodes=8 ) # create the cluster compute_target = ComputeTarget.create(ws, cluster_name, compute_config) compute_target.wait_for_completion(show_output=AZUREML_VERBOSE) if AZUREML_VERBOSE: print(compute_target.get_status().serialize()) ###Output Found existing compute target gensen-aml ###Markdown 2.2 Prepare the Training ScriptThe training process involves the following steps:1. Create or load the dataset vocabulary2. Train on the training dataset for each batch epoch (batch size = 48 updates)3. Evaluate on the validation dataset for every 10 epochs4. Find the local minimum point on validation loss5. Save the best model and stop the training processIn this section, we define the training script and move all necessary dependencies to `project_folder`, which will eventually be submitted to the remote compute target. Note that the size of the folder can not exceed 300Mb, so large dependencies such as pre-trained embeddings must be accessed from the datastore. ###Code project_folder = os.path.join(CACHE_DIR, "gensen") os.makedirs(project_folder, exist_ok=True) ###Output _____no_output_____ ###Markdown The script for distributed GenSen training is provided at `./gensen_train.py`.In this example, we use MLflow to log metrics. We also use the [AzureML-Mlflow](https://pypi.org/project/azureml-mlflow/) package to persist these metrics to the AzureML workspace. This is done with no change to the provided training script! Note that logging is done for loss *per minibatch*. Copy the training script `gensen_train.py` and config file `gensen_config.json` into the project folder. ###Code utils_folder = os.path.join(project_folder, "utils_nlp") _ = shutil.copytree(UTIL_NLP_PATH, utils_folder) _ = shutil.copy(TRAIN_SCRIPT, os.path.join(utils_folder, "gensen")) _ = shutil.copy(CONFIG_PATH, os.path.join(utils_folder, "gensen")) ###Output _____no_output_____ ###Markdown 2.3 Define the Estimator and Experiment 2.3.1 Create a PyTorch EstimatorThe Azure ML SDK's PyTorch Estimator allows us to submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch).Note that `gensen_config.json` defines all the hyperparameters and paths when training GenSen model. The trained model will be saved in `models` to Azure Blob Storage. 
**Remember to clean the `models` folder in order to save new models.** ###Code if MAX_EPOCH: script_params = { "--config": "utils_nlp/gensen/gensen_config.json", "--data_folder": ws.get_default_datastore().as_mount(), "--max_epoch": MAX_EPOCH, } else: script_params = { "--config": "utils_nlp/gensen/gensen_config.json", "--data_folder": ws.get_default_datastore().as_mount(), } estimator = PyTorch( source_directory=project_folder, script_params=script_params, compute_target=compute_target, entry_script=ENTRY_SCRIPT, node_count=2, process_count_per_node=1, distributed_training=MpiConfiguration(), use_gpu=True, framework_version="1.1", conda_packages=["scikit-learn=0.20.3", "h5py", "nltk"], pip_packages=["azureml-mlflow>=1.0.43.1", "numpy>=1.16.0"], ) ###Output _____no_output_____ ###Markdown This Estimator specifies that the training script will run on `2` nodes, with one worker per node. In order to execute a distributed run using GPU, we must define `use_gpu` and `distributed_backend` to use MPI/Horovod. PyTorch, Horovod, and other necessary dependencies are installed automatically. If the training script makes use of packages that are not already defined in `.azureml/conda_dependencies.yml`, we must explicitly tell the estimator to install them via the constructor's `pip_packages` or `conda_packages` parameters.Note that if the estimator is being created for the first time, this step will take longer to run because the conda dependencies found under `.azureml/conda_dependencies.yml` must be installed from scratch. After the first run, it will use the existing conda environment and run the code directly. The training time will take around **2 hours** if you use the default value `max_epoch=None`, which means the training will stop if the local minimum loss has been found. User can specify the number of epochs for training.**Requirements:**- python=3.6.2- numpy=1.15.1- numpy-base=1.15.1- pip=10.0.1- python=3.6.6- python-dateutil=2.7.3- scikit-learn=0.20.3- azureml-defaults- h5py- nltk 2.3.2 Create the ExperimentCreate an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architectureexperiment) to track all the runs in the AzureML workspace for this tutorial. ###Code experiment_name = EXPERIMENT_NAME experiment = Experiment(ws, name=experiment_name) ###Output _____no_output_____ ###Markdown 2.4 Submit the Training Job to the Compute TargetWe can run the experiment by simply submitting the Estimator object to the compute target. Note that this call is asynchronous. ###Code run = experiment.submit(estimator) if AZUREML_VERBOSE: print(run) ###Output _____no_output_____ ###Markdown 2.4.1 Monitor the RunWe can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. The widget automatically plots and visualizes the loss metric that we logged to the AzureML workspace. ###Code RunDetails(run).show() _ = run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until the script has completed training. ###Output _____no_output_____ ###Markdown 2.4.2 Interpret the Training ResultsThe following chart shows the model validation loss with different node configurations on AmlCompute. 
We find that the minimum validation loss decreases as the number of nodes increases; that is, the performance scales with the number of nodes in the cluster.| Standard_NC6 | AML_1node | AML_2nodes | AML_4nodes | AML_8nodes || --- | --- | --- | --- | --- || min_val_loss | 4.81 | 4.78 | 4.77 | 4.58 |We also observe common tradeoffs associated with distributed training. We make use of [Horovod](https://github.com/horovod/horovod), a distributed training tool for many popular deep learning frameworks that enables parallelization of work across the nodes in the cluster. Distributed training decreases the time it takes for the model to converge in theory, but the model may also take more time in communicating with each node. Note that the communication time will eventually become negligible when training on larger and larger datasets, but being aware of this tradeoff is helpful for choosing the node configuration when training on smaller datasets. 3 Tune Model HyperparametersNow that we've seen how to do a simple PyTorch training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities. 3.1 Start a Hyperparameter SweepFirst, we define the hyperparameter space to sweep over. Since the training script uses a learning rate schedule to decay the learning rate every several epochs, we can tune the initial learning rate parameter. In this example we will use random sampling to try different configuration sets of hyperparameters to minimize our primary metric, the best validation loss.Then, we specify the early termination policy to use to early terminate poorly performing runs. Here we use the `BanditPolicy`, which terminates any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we will apply this policy every epoch (since we report our the validation loss metric every epoch and `evaluation_interval=1`). Note that we explicitly define `delay_evaluation` such that the first policy evaluation does not occur until after the 10th epoch.Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparametersspecify-an-early-termination-policy) for more information on the BanditPolicy and other policies available. ###Code param_sampling = RandomParameterSampling({"learning_rate": uniform(0.0001, 0.001)}) early_termination_policy = BanditPolicy( slack_factor=0.15, evaluation_interval=1, delay_evaluation=10 ) hyperdrive_config = HyperDriveConfig( estimator=estimator, hyperparameter_sampling=param_sampling, policy=early_termination_policy, primary_metric_name="min_val_loss", primary_metric_goal=PrimaryMetricGoal.MINIMIZE, max_total_runs=MAX_TOTAL_RUNS, max_concurrent_runs=MAX_CONCURRENT_RUNS, ) ###Output _____no_output_____ ###Markdown Finally, lauch the hyperparameter tuning job. ###Code hyperdrive_run = experiment.submit(hyperdrive_config) # Start the HyperDrive run ###Output _____no_output_____ ###Markdown 3.2 Monitor HyperDrive RunsWe can monitor the progress of the runs with a Jupyter widget, or again block until the run has completed. ###Code RunDetails(hyperdrive_run).show() _ = hyperdrive_run.wait_for_completion(show_output=AZUREML_VERBOSE) # Block until complete ###Output _____no_output_____ ###Markdown 3.2.1 Interpret the Tuning ResultsThe chart below shows 4 different threads running in parallel with different learning rates. The number of total runs is 8. 
We pick the best learning rate by minimizing the validation loss. The HyperDrive run automatically shows the tracking charts (examples below) to facilitate visualization of the tuning process.![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune1.PNG)![Tuning](https://nlpbp.blob.core.windows.net/images/gensen_tune2.PNG)**From the results in section [2.4.1 Monitor the Run](2.4.1-Monitor-the-Run), the best validation loss for 1 node is 4.81, but with tuning we can easily achieve better performance, around 4.65.** 3.3 Find the Best Model Once all the runs complete, we can find the run that produced the model with the lowest loss. ###Code best_run = hyperdrive_run.get_best_run_by_primary_metric() best_run_metrics = best_run.get_metrics() print( "Best Run:\n Validation loss: {0:.5f} \n Learning rate: {1:.5f} \n".format( best_run_metrics["min_val_loss"], best_run_metrics["learning_rate"] ) ) # Persist properties of the run so we can access the logged metrics later sb.glue("min_val_loss", best_run_metrics['min_val_loss']) sb.glue("learning_rate", best_run_metrics['learning_rate']) ###Output _____no_output_____
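###Markdown As an optional follow-up (not part of the original walkthrough), we can also rank every child run of the sweep by the same primary metric to see how each sampled learning rate performed. The sketch below only uses the standard `Run` API and assumes each completed child run logged a `min_val_loss` metric. ###Code
# Optional: rank the sweep's child runs by their logged validation loss.
ranked = []
for child in hyperdrive_run.get_children():
    metrics = child.get_metrics()
    if "min_val_loss" in metrics:
        loss = metrics["min_val_loss"]
        loss = min(loss) if isinstance(loss, list) else loss  # some runs log a list of values
        ranked.append((child.id, loss))

for run_id, loss in sorted(ranked, key=lambda item: item[1]):
    print("{}\tmin_val_loss={:.4f}".format(run_id, loss))
###Output _____no_output_____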
misc/LR_tfidf_gridsearch.ipynb
###Markdown Viewings ###Code def read_pickle(filename: str): '''Read pickle to get the info''' list_pickle = pickle.load(open(filename,"rb")) return list_pickle train_tf_idf = read_pickle('train_tf_idf.pickle') dev_tf_idf = read_pickle('dev_tf_idf.pickle') test_tf_idf = read_pickle('test_tf_idf.pickle') train_tf_idf[0] (len(train_tf_idf), len(dev_tf_idf), len(test_tf_idf)) (len(test_tf_idf[0][2]), len(test_tf_idf[0][3])) def bool_list(batch): '''Input : train_tf_idf or dev_tf_idf or test_tf_idf Output: list of integers corresponding to Y vector, 0 for False, 1 for True''' list_bool = [list(elem[4]) for elem in batch] preprocessed_list_bool = [] for boolean in list_bool: if boolean == [False]: preprocessed_list_bool.append(0) elif boolean ==[1]: preprocessed_list_bool.append(1) else: preprocessed_list_bool.append(int(boolean[0])) return preprocessed_list_bool sns.countplot(x=bool_list(train_tf_idf)) sns.countplot(x=bool_list(dev_tf_idf)) sns.countplot(x=bool_list(test_tf_idf)) ###Output _____no_output_____ ###Markdown There is some strange values in test_tf_idf (2 and 3) should I consider them as 1 ? I will assume that. Modelisation ###Code def get_X_vectors(batch): '''Input : batch Output : Array of vectors''' X = [] for i in range(len(batch)): X.append(np.concatenate((batch[i][2], batch[i][3]))) return np.array(X) X_train = get_X_vectors(train_tf_idf) X_train.shape X_test = get_X_vectors(test_tf_idf) X_test.shape def get_y_vector(batch): '''Input : batch Output : array of integers (0 or 1)''' list_bool = [list(elem[4]) for elem in batch] preprocessed_list_bool = [] for boolean in list_bool: if boolean == [False]: preprocessed_list_bool.append(0) else: preprocessed_list_bool.append(1) return np.array(preprocessed_list_bool) y_train = get_y_vector(train_tf_idf) y_train.size y_test = get_y_vector(test_tf_idf) y_test.size import warnings warnings.filterwarnings('ignore') model = LogisticRegression() space = dict() space['C'] = np.logspace(-3,3,5) space['penalty'] = ['l1', 'l2'] space['solver'] = ['newton-cg', 'lbfgs', 'liblinear'] search = GridSearchCV(model, space, scoring='f1', n_jobs=3, cv=5) result = search.fit(X_train, y_train) print('Best Score: %s' % result.best_score_) print('Best Hyperparameters: %s' % result.best_params_) logreg = LogisticRegression(penalty='l1', C=1.0, solver='liblinear') logreg.fit(X_train, y_train) y_pred = logreg.predict(X_test) print("F1-score:",metrics.f1_score(y_test, y_pred)) y_pred_scores = logreg.predict_proba(X_test) y_pred_scores # Creating a pickle of results def LR_results(batch, y_pred_scores): if len(batch)!=len(y_pred_scores): raise ValueError('Array are not of the same size') LR_results = [(batch[i][0], batch[i][1], y_pred_scores[i, 0]) for i in range(len(batch))] return LR_results results = LR_results(test_tf_idf, y_pred_scores) results[0:5] outfile = open('LR_results.pickle', 'wb') pickle.dump(results, outfile) outfile.close() print(f"Execution time : {time.strftime('%H:%M:%S', time.gmtime(time.time()-t))}") ###Output Execution time : 00:28:13
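###Markdown As an optional check (not in the original notebook), we can inspect the full cross-validation table behind `result.best_params_` to see how the other `C`/`penalty`/`solver` combinations scored. The sketch below uses the standard `cv_results_` attribute of the fitted `GridSearchCV` object. ###Code
import pandas as pd

# Optional: look at the full grid-search table, not just the single best combination.
cv_results = pd.DataFrame(result.cv_results_)
cols = ["param_C", "param_penalty", "param_solver", "mean_test_score", "rank_test_score"]
print(cv_results[cols].sort_values("rank_test_score").head(10))
###Output _____no_output_____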
Módulo04Aula11.ipynb
###Markdown ###Code import pandas as pd # Historical data for five companies ('dado_hist' means historical data) dado_hist = { 'Empresa 1': [4,9,6,5], 'Empresa 2': [9,6,4,15], 'Empresa 3': [6,4,2,25], 'Empresa 4': [4,2,4,35], 'Empresa 5': [2,4,9,45] } # Build a DataFrame from the dictionary and print it as plain text df = pd.DataFrame(dado_hist) print(df) # Render the full DataFrame as an HTML table in the notebook from IPython.display import display, HTML display(HTML(df.to_html())) # First two rows only dfa = df.head(2) display(HTML(dfa.to_html())) # Last two rows only dft = df.tail(2) display(HTML(dft.to_html())) ###Output _____no_output_____
onnxruntime/python/tools/bert/notebooks/Tensorflow_Keras_Bert-Squad_OnnxRuntime_CPU.ipynb
###Markdown 3. Export model to ONNX using Keras2onnxNow we use Keras2onnx to export the model to ONNX format. It takes about 18 minutes for the large model. ###Code import keras2onnx output_model_path = os.path.join(output_dir, 'keras_{}.onnx'.format(model_name_or_path)) if enable_overwrite or not os.path.exists(output_model_path): model.predict(inputs) start = time.time() onnx_model = keras2onnx.convert_keras(model, model.name) keras2onnx.save_model(onnx_model, output_model_path) print("Keras2onnx run time = {} s".format(format(time.time() - start, '.2f'))) ###Output The node number after optimization: 5257 -> 3836 ###Markdown 4. Inference the Exported Model with ONNX Runtime OpenMP Environment VariableOpenMP environment variable is important for CPU inference of Bert models. After running this notebook, you can find the best setting from [Performance Test Tool](Performance-Test-Tool) result for your machine.Setting environment variables shall be done before importing onnxruntime. Otherwise, they might not take effect. ###Code import os import psutil # You may change the settings in this cell according to Performance Test Tool result after running the whole notebook. use_openmp = True # ATTENTION: these environment variables must be set before importing onnxruntime. if use_openmp: os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True)) else: os.environ["OMP_NUM_THREADS"] = '1' os.environ["OMP_WAIT_POLICY"] = 'ACTIVE' ###Output _____no_output_____ ###Markdown Now we are ready to inference the model with ONNX Runtime. Here we can see that OnnxRuntime has better performance than TensorFlow for this example even without optimization. ###Code import psutil import onnxruntime import numpy # User might use onnxruntime-gpu for CPU inference. if use_openmp and 'CUDAExecutionProvider' in onnxruntime.get_available_providers(): print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package.") sess_options = onnxruntime.SessionOptions() # The following settings enables OpenMP, which is required to get best performance for CPU inference of Bert models. if use_openmp: sess_options.intra_op_num_threads=1 else: sess_options.intra_op_num_threads=psutil.cpu_count(logical=True) # Providers is optional. Only needed when you use onnxruntime-gpu for CPU inference. session = onnxruntime.InferenceSession(output_model_path, sess_options, providers=['CPUExecutionProvider']) # Use contiguous array as input could improve performance. inputs_onnx = {k_: numpy.ascontiguousarray(v_.numpy()) for k_, v_ in inputs.items()} # Warm up with one run. results = session.run(None, inputs_onnx) # Measure the latency. start = time.time() for _ in range(total_runs): results = session.run(None, inputs_onnx) end = time.time() print("ONNX Runtime cpu inference time for sequence length {} (model not optimized): {} ms".format(num_tokens, format((end - start) * 1000 / total_runs, '.2f'))) del session print("***** Verifying correctness (TensorFlow and ONNX Runtime) *****") print('start_scores are close:', numpy.allclose(results[0], start_scores.cpu(), rtol=1e-05, atol=1e-04)) print('end_scores are close:', numpy.allclose(results[1], end_scores.cpu(), rtol=1e-05, atol=1e-04)) ###Output ***** Verifying correctness (TensorFlow and ONNX Runtime) ***** WARNING:tensorflow:From <ipython-input-10-453158d8869f>:2: _EagerTensorBase.cpu (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.identity instead. 
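###Markdown Average latency hides tail behavior, so as an optional sanity check we can also look at latency percentiles. The sketch below simply reuses the variables defined above (`output_model_path`, `sess_options`, `inputs_onnx`, `total_runs`); the dedicated performance test tool later in this notebook reports these statistics more thoroughly. ###Code
import time
import numpy
import onnxruntime

# Optional: measure latency percentiles (P50/P90/P99) in addition to the average.
tmp_session = onnxruntime.InferenceSession(output_model_path, sess_options, providers=['CPUExecutionProvider'])
tmp_session.run(None, inputs_onnx)  # warm-up run

latencies_ms = []
for _ in range(total_runs):
    t0 = time.time()
    tmp_session.run(None, inputs_onnx)
    latencies_ms.append((time.time() - t0) * 1000.0)

for p in (50, 90, 99):
    print("P{} latency: {:.2f} ms".format(p, numpy.percentile(latencies_ms, p)))

del tmp_session
###Output _____no_output_____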
start_scores are close: True end_scores are close: True ###Markdown 5. Model Optimization[ONNX Runtime BERT Model Optimization Tools](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert) is a set of tools for optimizing and testing BERT models. Let's try some of them on the exported models. BERT Optimization ScriptThe script **bert_model_optimization.py** can help optimize BERT model exported by PyTorch, tf2onnx or keras2onnx. Since our model is exported by keras2onnx, we shall use **--model_type bert_keras** parameter.It will also tell whether the model is fully optimized or not. If not, that means you might need change the script to fuse some new pattern of subgraph. ###Code optimized_model_path = os.path.join(output_dir, 'keras_bert_large_opt_cpu.onnx') %run bert_scripts/bert_model_optimization.py --input $output_model_path --output $optimized_model_path --model_type bert_keras --num_heads 16 --hidden_size 1024 ###Output BertOnnxModelTF.py: Fused LayerNormalization count: 49 BertOnnxModelKeras.py: Fused Gelu count:24 BertOnnxModelKeras.py: start processing embedding layer... BertOnnxModelKeras.py: Found word embedding. name:tf_bert_for_question_answering/bert/embeddings/Gather/resource:0, shape:(30522, 1024) BertOnnxModelKeras.py: Found word embedding. name:tf_bert_for_question_answering/bert/embeddings/position_embeddings/embedding_lookup/413066:0, shape:(512, 1024) BertOnnxModelKeras.py: Found segment embedding. name:tf_bert_for_question_answering/bert/embeddings/token_type_embeddings/embedding_lookup/413071:0, shape:(2, 1024) BertOnnxModelKeras.py: Create Embedding node OnnxModel.py: Graph pruned: 0 inputs, 0 outputs and 9 nodes are removed BertOnnxModelKeras.py: Fused mask BertOnnxModelKeras.py: Skip consequent Reshape count: 24 BertOnnxModel.py: Fused Reshape count:0 BertOnnxModel.py: Fused SkipLayerNormalization count: 48 BertOnnxModelKeras.py: Fused Attention count:24 BertOnnxModel.py: Fused SkipLayerNormalization with Bias count:24 BertOnnxModelKeras.py: Remove 96 Reshape nodes. OnnxModel.py: Graph pruned: 0 inputs, 0 outputs and 2160 nodes are removed BertOnnxModel.py: opset verion: 11 OnnxModel.py: Output model to ./output\keras_bert_large_opt_cpu.onnx BertOnnxModel.py: EmbedLayer=1, Attention=24, Gelu=24, LayerNormalization=48, Succesful=True bert_model_optimization.py: The output model is fully optimized. ###Markdown We run the optimized model using same inputs. The inference latency is reduced after optimization. The output result is the same as the one before optimization. ###Code session = onnxruntime.InferenceSession(optimized_model_path, sess_options) # use one run to warm up a session session.run(None, inputs_onnx) # measure the latency. 
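# The single warm-up run above is deliberately excluded from the timed loop: the first
# session.run() typically pays one-time costs (memory arena allocation, lazy initialization),
# so averaging only the later runs over total_runs gives a steadier latency estimate.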
start = time.time() for _ in range(total_runs): opt_results = session.run(None, inputs_onnx) end = time.time() print("ONNX Runtime cpu inference time on optimized model: {} ms".format(format((end - start) * 1000 / total_runs, '.2f'))) del session print("***** Verifying correctness (before and after optimization) *****") print('start_scores are close:', numpy.allclose(opt_results[0], start_scores.cpu(), rtol=1e-05, atol=1e-04)) print('end_scores are close:', numpy.allclose(opt_results[1], end_scores.cpu(), rtol=1e-05, atol=1e-04)) ###Output ***** Verifying correctness (before and after optimization) ***** start_scores are close: True end_scores are close: True ###Markdown Model Results Comparison ToolIf your BERT model has three inputs, a script compare_bert_results.py can be used to do a quick verification. The tool will generate some fake input data, and compare results from both the original and optimized models. If outputs are all close, it is safe to use the optimized model.Example of comparing the models before and after optimization: ###Code # The base model is exported using sequence length 26 %run ./bert_scripts/compare_bert_results.py --baseline_model $output_model_path --optimized_model $optimized_model_path --batch_size 1 --sequence_length 26 --samples 10 ###Output 100% passed for 10 random inputs given thresholds (rtol=0.001, atol=0.0001). maximum absolute difference=2.3484230041503906e-05 maximum relative difference=0.00013404049968812615 ###Markdown Performance Test ToolThis tool measures performance of BERT model inference using OnnxRuntime Python API.The following command will create 100 samples of batch_size 1 and sequence length 128 to run inference, then calculate performance numbers like average latency and throughput etc. It takes about 20 minutes to run this test. You can remove --all to reduce number of settings in the test. ###Code %run ./bert_scripts/bert_perf_test.py --model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100 --test_times 1 --inclusive --all ###Output Generating 100 samples for batch_size=1 sequence_length=128 Extra latency for converting inputs to contiguous: 0.04 ms Test summary is saved to output\perf_results_CPU_B1_S128_20200319-141051.txt ###Markdown Let's load the summary file and take a look. In this machine, the best result is achieved by OpenMP. The best setting might be difference using different hardware or model. ###Code import glob import pandas latest_result_file = max(glob.glob(os.path.join(output_dir, "perf_results_*.txt")), key=os.path.getmtime) result_data = pandas.read_table(latest_result_file, converters={'OMP_NUM_THREADS': str, 'OMP_WAIT_POLICY':str}) print(latest_result_file) print("The best setting is: {} openmp; {} contiguous array".format('use' if result_data['intra_op_num_threads'].iloc[0] == 1 else 'NO', 'use' if result_data['contiguous'].iloc[0] else 'NO')) result_data.drop(['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu', 'warmup'], axis=1, inplace=True) result_data.drop(['Latency_P50', 'Latency_P75', 'Latency_P90', 'Latency_P95'], axis=1, inplace=True) cols = result_data.columns.tolist() cols = cols[-4:] + cols[:-4] result_data = result_data[cols] result_data ###Output ./output\perf_results_CPU_B1_S128_20200319-141051.txt The best setting is: use openmp; NO contiguous array ###Markdown 6. 
Additional InfoNote that running Jupyter Notebook has slight impact on performance result since Jupyter Notebook is using system resources like CPU and memory etc. It is recommended to close Jupyter Notebook and other applications, then run the performance test tool in a console to get more accurate performance numbers.[OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) could get slightly better performance than python API. If you use C API in inference, you can use OnnxRuntime_Perf_Test.exe built from source to measure performance instead.Here is the machine configuration that generated the above results. The machine has GPU but not used in CPU inference.You might get slower or faster result based on your hardware. ###Code %run ./bert_scripts/MachineInfo.py --silent ###Output { "gpu": { "driver_version": "441.22", "devices": [ { "memory_total": 8589934592, "memory_available": 611880960, "name": "GeForce GTX 1070" } ] }, "cpu": { "brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz", "cores": 6, "logical_cores": 12, "hz": "3.1920 GHz", "l2_cache": "1536 KB", "l3_cache": "12288 KB", "processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel" }, "memory": { "total": 16971259904, "available": 6245142528 }, "python": "3.6.10.final.0 (64 bit)", "os": "Windows-10-10.0.18362-SP0", "onnxruntime": { "version": "1.2.0", "support_gpu": false }, "pytorch": { "version": "1.4.0+cpu", "support_gpu": false }, "tensorflow": { "version": "2.1.0", "git_version": "v2.1.0-rc2-17-ge5bf8de410", "support_gpu": true } } ###Markdown Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference TensorFlow Bert Model with ONNX Runtime on CPU In this tutorial, you'll be introduced to how to load a Bert model using TensorFlow, convert it to ONNX using Keras2onnx, and inference it for high performance using ONNX Runtime. In the following sections, we are going to use the Bert model trained with Stanford Question Answering Dataset (SQuAD) dataset as an example. Bert SQuAD model is used in question answering scenarios, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. 0. Prerequisites First we need a python environment before running this notebook.You can install [AnaConda](https://www.anaconda.com/distribution/) and [Git](https://git-scm.com/downloads) and open an AnaConda console when it is done. Then you can run the following commands to create a conda environment named cpu_env:```consoleconda create -n cpu_env python=3.6conda activate cpu_envconda install -c anaconda ipykernelconda install -c conda-forge ipywidgetspython -m ipykernel install --user --name=cpu_env```Finally, launch Jupyter Notebook and you can choose cpu_env as kernel to run this notebook.Let's install [Tensorflow](https://www.tensorflow.org/install), [OnnxRuntime](https://microsoft.github.io/onnxruntime/), Keras2Onnx and other packages like the following: ###Code import sys !{sys.executable} -m pip install --quiet --upgrade tensorflow==2.1.0 !{sys.executable} -m pip install --quiet --upgrade onnxruntime # Install keras2onnx from source, since the latest package (1.6.0) does not support bert models from tensorflow 2.1 currently. !{sys.executable} -m pip install --quiet git+https://github.com/microsoft/onnxconverter-common !{sys.executable} -m pip install --quiet git+https://github.com/onnx/keras-onnx # Install other packages used in this notebook. 
!{sys.executable} -m pip install --quiet transformers==2.5.1 !{sys.executable} -m pip install --quiet wget psutil onnx pytz pandas py-cpuinfo py3nvml # Whether allow overwrite existing script or model. enable_overwrite = True # Number of runs to get average latency. total_runs = 100 import os import wget cache_dir = "./squad" output_dir = "./output" script_dir = './bert_scripts' for directory in [cache_dir, output_dir, script_dir]: if not os.path.exists(directory): os.makedirs(directory) # Download scripts for BERT optimization. url_prfix = "https://raw.githubusercontent.com/microsoft/onnxruntime/master/onnxruntime/python/tools/bert/" script_files = ['bert_perf_test.py', 'bert_test_data.py', 'compare_bert_results.py', 'BertOnnxModel.py', 'BertOnnxModelKeras.py', 'BertOnnxModelTF.py', 'OnnxModel.py', 'bert_model_optimization.py'] for filename in script_files: target_file = os.path.join(script_dir, filename) if enable_overwrite and os.path.exists(target_file): os.remove(target_file) if not os.path.exists(target_file): wget.download(url_prfix + filename, target_file) print("Downloaded", filename) ###Output 100% [..............................................................................] 15310 / 15310Downloaded bert_perf_test.py 100% [................................................................................] 9571 / 9571Downloaded bert_test_data.py 100% [................................................................................] 7272 / 7272Downloaded compare_bert_results.py 100% [..............................................................................] 44905 / 44905Downloaded BertOnnxModel.py 100% [..............................................................................] 21565 / 21565Downloaded BertOnnxModelKeras.py 100% [..............................................................................] 26114 / 26114Downloaded BertOnnxModelTF.py 100% [..............................................................................] 22773 / 22773Downloaded OnnxModel.py 100% [................................................................................] 7795 / 7795Downloaded bert_model_optimization.py ###Markdown 1. Load Pretrained Bert model Start to load fine-tuned model. This step take a few minutes to download the model (1.3 GB) for the first time. ###Code from transformers import (TFBertForQuestionAnswering, BertTokenizer) model_name_or_path = 'bert-large-uncased-whole-word-masking-finetuned-squad' # Load model and tokenizer tokenizer = BertTokenizer.from_pretrained(model_name_or_path, do_lower_case=True, cache_dir=cache_dir) model = TFBertForQuestionAnswering.from_pretrained(model_name_or_path, cache_dir=cache_dir) ###Output _____no_output_____ ###Markdown 2. TensorFlow InferenceUse one example to run inference using TensorFlow as baseline. ###Code import numpy question, text = "What is ONNX Runtime?", "ONNX Runtime is a performance-focused inference engine for ONNX models." 
inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors='tf') start_scores, end_scores = model(inputs) num_tokens = len(inputs["input_ids"][0]) all_tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) print("The answer is:", ' '.join(all_tokens[numpy.argmax(start_scores) : numpy.argmax(end_scores)+1])) import time start = time.time() for _ in range(total_runs): start_scores, end_scores = model(inputs) end = time.time() print("Tensorflow Inference time for sequence length {} = {} ms".format(num_tokens, format((end - start) * 1000 / total_runs, '.2f'))) ###Output Tensorflow Inference time for sequence length 26 = 227.06 ms ###Markdown 3. Export model to ONNX using Keras2onnxNow we use Keras2onnx to export the model to ONNX format. It takes about 18 minutes for the large model. ###Code import keras2onnx output_model_path = os.path.join(output_dir, 'keras_{}.onnx'.format(model_name_or_path)) if enable_overwrite or not os.path.exists(output_model_path): model.predict(inputs) start = time.time() onnx_model = keras2onnx.convert_keras(model, model.name) keras2onnx.save_model(onnx_model, output_model_path) print("Keras2onnx run time = {} s".format(format(time.time() - start, '.2f'))) ###Output The node number after optimization: 5257 -> 3836 ###Markdown 4. Inference the Exported Model with ONNX Runtime OpenMP Environment VariableOpenMP environment variable is important for CPU inference of Bert models. After running this notebook, you can find the best setting from [Performance Test Tool](Performance-Test-Tool) result for your machine.Setting environment variables shall be done before importing onnxruntime. Otherwise, they might not take effect. ###Code import os import psutil # You may change the settings in this cell according to Performance Test Tool result after running the whole notebook. use_openmp = True # ATTENTION: these environment variables must be set before importing onnxruntime. if use_openmp: os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True)) else: os.environ["OMP_NUM_THREADS"] = '1' os.environ["OMP_WAIT_POLICY"] = 'ACTIVE' ###Output _____no_output_____ ###Markdown Now we are ready to inference the model with ONNX Runtime. Here we can see that OnnxRuntime has better performance than TensorFlow for this example even without optimization. ###Code import psutil import onnxruntime import numpy # User might use onnxruntime-gpu for CPU inference. if use_openmp and 'CUDAExecutionProvider' in onnxruntime.get_available_providers(): print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package.") sess_options = onnxruntime.SessionOptions() # The following settings enables OpenMP, which is required to get best performance for CPU inference of Bert models. if use_openmp: sess_options.intra_op_num_threads=1 else: sess_options.intra_op_num_threads=psutil.cpu_count(logical=True) # Providers is optional. Only needed when you use onnxruntime-gpu for CPU inference. session = onnxruntime.InferenceSession(output_model_path, sess_options, providers=['CPUExecutionProvider']) # Use contiguous array as input could improve performance. inputs_onnx = {k_: numpy.ascontiguousarray(v_.numpy()) for k_, v_ in inputs.items()} # Warm up with one run. results = session.run(None, inputs_onnx) # Measure the latency. 
start = time.time() for _ in range(total_runs): results = session.run(None, inputs_onnx) end = time.time() print("ONNX Runtime cpu inference time for sequence length {} (model not optimized): {} ms".format(num_tokens, format((end - start) * 1000 / total_runs, '.2f'))) del session print("***** Verifying correctness (TensorFlow and ONNX Runtime) *****") print('start_scores are close:', numpy.allclose(results[0], start_scores.cpu(), rtol=1e-05, atol=1e-04)) print('end_scores are close:', numpy.allclose(results[1], end_scores.cpu(), rtol=1e-05, atol=1e-04)) ###Output ***** Verifying correctness (TensorFlow and ONNX Runtime) ***** WARNING:tensorflow:From <ipython-input-10-453158d8869f>:2: _EagerTensorBase.cpu (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.identity instead. start_scores are close: True end_scores are close: True ###Markdown 5. Model Optimization[ONNX Runtime BERT Model Optimization Tools](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert) is a set of tools for optimizing and testing BERT models. Let's try some of them on the exported models. BERT Optimization ScriptThe script **bert_model_optimization.py** can help optimize BERT model exported by PyTorch, tf2onnx or keras2onnx. Since our model is exported by keras2onnx, we shall use **--model_type bert_keras** parameter.It will also tell whether the model is fully optimized or not. If not, that means you might need change the script to fuse some new pattern of subgraph. ###Code optimized_model_path = os.path.join(output_dir, 'keras_bert_large_opt_cpu.onnx') %run bert_scripts/bert_model_optimization.py --input $output_model_path --output $optimized_model_path --model_type bert_keras --num_heads 16 --hidden_size 1024 ###Output BertOnnxModelTF.py: Fused LayerNormalization count: 49 BertOnnxModelKeras.py: Fused Gelu count:24 BertOnnxModelKeras.py: start processing embedding layer... BertOnnxModelKeras.py: Found word embedding. name:tf_bert_for_question_answering/bert/embeddings/Gather/resource:0, shape:(30522, 1024) BertOnnxModelKeras.py: Found word embedding. name:tf_bert_for_question_answering/bert/embeddings/position_embeddings/embedding_lookup/413066:0, shape:(512, 1024) BertOnnxModelKeras.py: Found segment embedding. name:tf_bert_for_question_answering/bert/embeddings/token_type_embeddings/embedding_lookup/413071:0, shape:(2, 1024) BertOnnxModelKeras.py: Create Embedding node OnnxModel.py: Graph pruned: 0 inputs, 0 outputs and 9 nodes are removed BertOnnxModelKeras.py: Fused mask BertOnnxModelKeras.py: Skip consequent Reshape count: 24 BertOnnxModel.py: Fused Reshape count:0 BertOnnxModel.py: Fused SkipLayerNormalization count: 48 BertOnnxModelKeras.py: Fused Attention count:24 BertOnnxModel.py: Fused SkipLayerNormalization with Bias count:24 BertOnnxModelKeras.py: Remove 96 Reshape nodes. OnnxModel.py: Graph pruned: 0 inputs, 0 outputs and 2160 nodes are removed BertOnnxModel.py: opset verion: 11 OnnxModel.py: Output model to ./output\keras_bert_large_opt_cpu.onnx BertOnnxModel.py: EmbedLayer=1, Attention=24, Gelu=24, LayerNormalization=48, Succesful=True bert_model_optimization.py: The output model is fully optimized. ###Markdown We run the optimized model using same inputs. The inference latency is reduced after optimization. The output result is the same as the one before optimization. 
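Both the un-optimized run above and the optimized run in the cell below follow the same warm-up-then-average timing pattern. If you want to reuse it for other sessions, a small helper along these lines is enough (just a sketch; `measure_latency` is a name made up here, not part of the ONNX Runtime tools):

```python
import time

def measure_latency(session, feed, runs=100, warmup=1):
    """Average latency in milliseconds of session.run() over the given input feed."""
    for _ in range(warmup):              # keep one-time costs out of the measurement
        session.run(None, feed)
    start = time.time()
    for _ in range(runs):
        session.run(None, feed)
    return (time.time() - start) * 1000.0 / runs

# Example (uses the session and inputs_onnx created in the surrounding cells):
# print("{:.2f} ms".format(measure_latency(session, inputs_onnx, runs=total_runs)))
```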
###Code session = onnxruntime.InferenceSession(optimized_model_path, sess_options) # use one run to warm up a session session.run(None, inputs_onnx) # measure the latency. start = time.time() for _ in range(total_runs): opt_results = session.run(None, inputs_onnx) end = time.time() print("ONNX Runtime cpu inference time on optimized model: {} ms".format(format((end - start) * 1000 / total_runs, '.2f'))) del session print("***** Verifying correctness (before and after optimization) *****") print('start_scores are close:', numpy.allclose(opt_results[0], start_scores.cpu(), rtol=1e-05, atol=1e-04)) print('end_scores are close:', numpy.allclose(opt_results[1], end_scores.cpu(), rtol=1e-05, atol=1e-04)) ###Output ***** Verifying correctness (before and after optimization) ***** start_scores are close: True end_scores are close: True ###Markdown Model Results Comparison ToolIf your BERT model has three inputs, a script compare_bert_results.py can be used to do a quick verification. The tool will generate some fake input data, and compare results from both the original and optimized models. If outputs are all close, it is safe to use the optimized model.Example of comparing the models before and after optimization: ###Code # The base model is exported using sequence length 26 %run ./bert_scripts/compare_bert_results.py --baseline_model $output_model_path --optimized_model $optimized_model_path --batch_size 1 --sequence_length 26 --samples 10 ###Output 100% passed for 10 random inputs given thresholds (rtol=0.001, atol=0.0001). maximum absolute difference=2.3484230041503906e-05 maximum relative difference=0.00013404049968812615 ###Markdown Performance Test ToolThis tool measures performance of BERT model inference using OnnxRuntime Python API.The following command will create 100 samples of batch_size 1 and sequence length 128 to run inference, then calculate performance numbers like average latency and throughput etc. It takes about 20 minutes to run this test. You can remove --all to reduce number of settings in the test. ###Code %run ./bert_scripts/bert_perf_test.py --model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100 --test_times 1 --inclusive --all ###Output Generating 100 samples for batch_size=1 sequence_length=128 Extra latency for converting inputs to contiguous: 0.04 ms Test summary is saved to output\perf_results_CPU_B1_S128_20200319-141051.txt ###Markdown Let's load the summary file and take a look. In this machine, the best result is achieved by OpenMP. The best setting might be difference using different hardware or model. 
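The cell below reads the newest summary file with pandas and treats its first row as the best configuration. If you want to reuse that configuration in a later session, something along these lines could work (a sketch only; it assumes, as the cell below does, that the first row of the summary is the best setting and that the column names match):

```python
import pandas

def settings_from_summary(path):
    """Read a perf_results_*.txt summary and return its first row as run settings."""
    df = pandas.read_table(path, converters={'OMP_NUM_THREADS': str, 'OMP_WAIT_POLICY': str})
    best = df.iloc[0]
    return {
        'OMP_NUM_THREADS': best['OMP_NUM_THREADS'],
        'OMP_WAIT_POLICY': best['OMP_WAIT_POLICY'],
        'intra_op_num_threads': int(best['intra_op_num_threads']),
    }

# Example (after running the cell below, which defines latest_result_file):
# best = settings_from_summary(latest_result_file)
# Remember: the OMP_* variables only take effect if set before importing onnxruntime.
```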
###Code import glob import pandas latest_result_file = max(glob.glob(os.path.join(output_dir, "perf_results_*.txt")), key=os.path.getmtime) result_data = pandas.read_table(latest_result_file, converters={'OMP_NUM_THREADS': str, 'OMP_WAIT_POLICY':str}) print(latest_result_file) print("The best setting is: {} openmp; {} contiguous array".format('use' if result_data['intra_op_num_threads'].iloc[0] == 1 else 'NO', 'use' if result_data['contiguous'].iloc[0] else 'NO')) result_data.drop(['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu', 'warmup'], axis=1, inplace=True) result_data.drop(['Latency_P50', 'Latency_P75', 'Latency_P90', 'Latency_P95'], axis=1, inplace=True) cols = result_data.columns.tolist() cols = cols[-4:] + cols[:-4] result_data = result_data[cols] result_data ###Output ./output\perf_results_CPU_B1_S128_20200319-141051.txt The best setting is: use openmp; NO contiguous array ###Markdown 6. Additional InfoNote that running Jupyter Notebook has slight impact on performance result since Jupyter Notebook is using system resources like CPU and memory etc. It is recommended to close Jupyter Notebook and other applications, then run the performance test tool in a console to get more accurate performance numbers.[OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) could get slightly better performance than python API. If you use C API in inference, you can use OnnxRuntime_Perf_Test.exe built from source to measure performance instead.Here is the machine configuration that generated the above results. The machine has GPU but not used in CPU inference.You might get slower or faster result based on your hardware. ###Code %run ./bert_scripts/MachineInfo.py --silent ###Output { "gpu": { "driver_version": "441.22", "devices": [ { "memory_total": 8589934592, "memory_available": 611880960, "name": "GeForce GTX 1070" } ] }, "cpu": { "brand": "Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz", "cores": 6, "logical_cores": 12, "hz": "3.1920 GHz", "l2_cache": "1536 KB", "l3_cache": "12288 KB", "processor": "Intel64 Family 6 Model 158 Stepping 10, GenuineIntel" }, "memory": { "total": 16971259904, "available": 6245142528 }, "python": "3.6.10.final.0 (64 bit)", "os": "Windows-10-10.0.18362-SP0", "onnxruntime": { "version": "1.2.0", "support_gpu": false }, "pytorch": { "version": "1.4.0+cpu", "support_gpu": false }, "tensorflow": { "version": "2.1.0", "git_version": "v2.1.0-rc2-17-ge5bf8de410", "support_gpu": true } } ###Markdown Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference TensorFlow Bert Model with ONNX Runtime on CPU In this tutorial, you'll be introduced to how to load a Bert model using TensorFlow, convert it to ONNX using Keras2onnx, and inference it for high performance using ONNX Runtime. In the following sections, we are going to use the Bert model trained with Stanford Question Answering Dataset (SQuAD) dataset as an example. Bert SQuAD model is used in question answering scenarios, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. 0. Prerequisites First we need a python environment before running this notebook.You can install [AnaConda](https://www.anaconda.com/distribution/) and [Git](https://git-scm.com/downloads) and open an AnaConda console when it is done. 
Then you can run the following commands to create a conda environment named cpu_env:```consoleconda create -n cpu_env python=3.6conda activate cpu_envconda install -c anaconda ipykernelconda install -c conda-forge ipywidgetspython -m ipykernel install --user --name=cpu_env```Finally, launch Jupyter Notebook and you can choose cpu_env as kernel to run this notebook.Let's install [Tensorflow](https://www.tensorflow.org/install), [OnnxRuntime](https://microsoft.github.io/onnxruntime/), Keras2Onnx and other packages like the following: ###Code import sys !{sys.executable} -m pip install --quiet --upgrade tensorflow==2.1.0 !{sys.executable} -m pip install --quiet --upgrade onnxruntime # Install keras2onnx from source, since the latest package (1.6.0) does not support bert models from tensorflow 2.1 currently. !{sys.executable} -m pip install --quiet git+https://github.com/microsoft/onnxconverter-common !{sys.executable} -m pip install --quiet git+https://github.com/onnx/keras-onnx # Install other packages used in this notebook. !{sys.executable} -m pip install --quiet transformers==2.5.1 !{sys.executable} -m pip install --quiet wget psutil onnx pytz pandas py-cpuinfo py3nvml # Whether allow overwrite existing script or model. enable_overwrite = True # Number of runs to get average latency. total_runs = 100 import os import wget cache_dir = "./squad" output_dir = "./output" script_dir = './bert_scripts' for directory in [cache_dir, output_dir, script_dir]: if not os.path.exists(directory): os.makedirs(directory) # Download scripts for BERT optimization. url_prfix = "https://raw.githubusercontent.com/microsoft/onnxruntime/master/onnxruntime/python/tools/bert/" script_files = ['bert_perf_test.py', 'bert_test_data.py', 'compare_bert_results.py', 'BertOnnxModel.py', 'BertOnnxModelKeras.py', 'BertOnnxModelTF.py', 'Gpt2OnnxModel.py', 'OnnxModel.py', 'bert_model_optimization.py'] for filename in script_files: target_file = os.path.join(script_dir, filename) if enable_overwrite and os.path.exists(target_file): os.remove(target_file) if not os.path.exists(target_file): wget.download(url_prfix + filename, target_file) print("Downloaded", filename) ###Output 100% [..............................................................................] 15310 / 15310Downloaded bert_perf_test.py 100% [................................................................................] 9571 / 9571Downloaded bert_test_data.py 100% [................................................................................] 7272 / 7272Downloaded compare_bert_results.py 100% [..............................................................................] 44905 / 44905Downloaded BertOnnxModel.py 100% [..............................................................................] 21565 / 21565Downloaded BertOnnxModelKeras.py 100% [..............................................................................] 26114 / 26114Downloaded BertOnnxModelTF.py 100% [..............................................................................] 22773 / 22773Downloaded OnnxModel.py 100% [................................................................................] 7795 / 7795Downloaded bert_model_optimization.py ###Markdown 1. Load Pretrained Bert model Start to load fine-tuned model. This step take a few minutes to download the model (1.3 GB) for the first time. 
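Because the first run downloads roughly 1.3 GB into `cache_dir`, it can be worth checking free disk space before running the cell below; this is a minimal sketch using only the standard library (the 2 GB threshold is an arbitrary safety margin):

```python
import shutil

# cache_dir ("./squad") is created in the setup cell above.
free_gb = shutil.disk_usage(cache_dir).free / (1024 ** 3)
print("Free space on the cache drive: {:.1f} GB".format(free_gb))
if free_gb < 2:
    print("Warning: less than 2 GB free; the ~1.3 GB model download may fail.")
```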
###Code from transformers import (TFBertForQuestionAnswering, BertTokenizer) model_name_or_path = 'bert-large-uncased-whole-word-masking-finetuned-squad' # Load model and tokenizer tokenizer = BertTokenizer.from_pretrained(model_name_or_path, do_lower_case=True, cache_dir=cache_dir) model = TFBertForQuestionAnswering.from_pretrained(model_name_or_path, cache_dir=cache_dir) ###Output _____no_output_____ ###Markdown 2. TensorFlow InferenceUse one example to run inference using TensorFlow as baseline. ###Code import numpy question, text = "What is ONNX Runtime?", "ONNX Runtime is a performance-focused inference engine for ONNX models." inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors='tf') start_scores, end_scores = model(inputs) num_tokens = len(inputs["input_ids"][0]) all_tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) print("The answer is:", ' '.join(all_tokens[numpy.argmax(start_scores) : numpy.argmax(end_scores)+1])) import time start = time.time() for _ in range(total_runs): start_scores, end_scores = model(inputs) end = time.time() print("Tensorflow Inference time for sequence length {} = {} ms".format(num_tokens, format((end - start) * 1000 / total_runs, '.2f'))) ###Output Tensorflow Inference time for sequence length 26 = 227.06 ms
docs/_downloads/e8d0748ca1aad4cdc05491f3344aad00/cifar10_tutorial.ipynb
###Markdown 분류기(Classifier) 학습하기============================지금까지 어떻게 신경망을 정의하고, 손실을 계산하며 또 가중치를 갱신하는지에대해서 배웠습니다.이제 아마도 이런 생각을 하고 계실텐데요,데이터는 어떻게 하나요?------------------------일반적으로 이미지나 텍스트, 오디오나 비디오 데이터를 다룰 때는 표준 Python 패키지를이용하여 NumPy 배열로 불러오면 됩니다. 그 후 그 배열을 ``torch.*Tensor`` 로 변환합니다.- 이미지는 Pillow나 OpenCV 같은 패키지가 유용합니다.- 오디오를 처리할 때는 SciPy와 LibROSA가 유용하고요.- 텍스트의 경우에는 그냥 Python이나 Cython을 사용해도 되고, NLTK나 SpaCy도 유용합니다.특별히 영상 분야를 위한 ``torchvision`` 이라는 패키지가 만들어져 있는데,여기에는 Imagenet이나 CIFAR10, MNIST 등과 같이 일반적으로 사용하는 데이터셋을 위한데이터 로더(data loader), 즉 ``torchvision.datasets`` 과 이미지용 데이터 변환기(data transformer), 즉 ``torch.utils.data.DataLoader`` 가 포함되어 있습니다.이러한 기능은 엄청나게 편리하며, 매번 유사한 코드(boilerplate code)를 반복해서작성하는 것을 피할 수 있습니다.이 튜토리얼에서는 CIFAR10 데이터셋을 사용합니다. 여기에는 다음과 같은 분류들이있습니다: '비행기(airplane)', '자동차(automobile)', '새(bird)', '고양이(cat)','사슴(deer)', '개(dog)', '개구리(frog)', '말(horse)', '배(ship)', '트럭(truck)'.그리고 CIFAR10에 포함된 이미지의 크기는 3x32x32로, 이는 32x32 픽셀 크기의 이미지가3개 채널(channel)의 색상로 이뤄져 있다는 것을 뜻합니다... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10이미지 분류기 학습하기----------------------------다음과 같은 단계로 진행해보겠습니다:1. ``torchvision`` 을 사용하여 CIFAR10의 학습용 / 시험용 데이터셋을 불러오고, 정규화(nomarlizing)합니다.2. 합성곱 신경망(Convolution Neural Network)을 정의합니다.3. 손실 함수를 정의합니다.4. 학습용 데이터를 사용하여 신경망을 학습합니다.5. 시험용 데이터를 사용하여 신경망을 검사합니다.1. CIFAR10을 불러오고 정규화하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^``torchvision`` 을 사용하여 매우 쉽게 CIFAR10을 불러올 수 있습니다. ###Code import torch import torchvision import torchvision.transforms as transforms ###Output _____no_output_____ ###Markdown torchvision 데이터셋의 출력(output)은 [0, 1] 범위를 갖는 PILImage 이미지입니다.이를 [-1, 1]의 범위로 정규화된 Tensor로 변환합니다.Note만약 Windows 환경에서 BrokenPipeError가 발생한다면, torch.utils.data.DataLoader()의 num_worker를 0으로 설정해보세요. ###Code transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 4 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ###Output _____no_output_____ ###Markdown 재미삼아 학습용 이미지 몇 개를 보겠습니다. ###Code import matplotlib.pyplot as plt import numpy as np # 이미지를 보여주기 위한 함수 def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # 학습용 이미지를 무작위로 가져오기 dataiter = iter(trainloader) images, labels = dataiter.next() # 이미지 보여주기 imshow(torchvision.utils.make_grid(images)) # 정답(label) 출력 print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size))) ###Output _____no_output_____ ###Markdown 2. 합성곱 신경망(Convolution Neural Network) 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^이전의 신경망 섹션에서 신경망을 복사한 후, (기존에 1채널 이미지만 처리하도록정의된 것을) 3채널 이미지를 처리할 수 있도록 수정합니다. 
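A side note on the `nn.Linear(16 * 5 * 5, 120)` layer defined in the cell below: the 16 x 5 x 5 figure follows from the convolution/pooling arithmetic on a 32x32 CIFAR10 image. The short sketch below just reproduces that arithmetic (the helper names are made up for illustration):

```python
def after_conv5(size):      # 5x5 convolution, no padding: output = input - 4
    return size - 5 + 1

def after_pool2(size):      # 2x2 max pooling: output = input // 2
    return size // 2

size = 32                                  # CIFAR10 images are 32x32
size = after_pool2(after_conv5(size))      # conv1 + pool -> 14
size = after_pool2(after_conv5(size))      # conv2 + pool -> 5
print(size, 16 * size * size)              # 5 400, hence nn.Linear(16 * 5 * 5, 120)
```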
###Code import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ###Output _____no_output_____ ###Markdown 3. 손실 함수와 Optimizer 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^교차 엔트로피 손실(Cross-Entropy loss)과 모멘텀(momentum) 값을 갖는 SGD를 사용합니다. ###Code import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ###Output _____no_output_____ ###Markdown 4. 신경망 학습하기^^^^^^^^^^^^^^^^^^^^이제 재미있는 부분이 시작됩니다.단순히 데이터를 반복해서 신경망에 입력으로 제공하고, 최적화(Optimize)만 하면됩니다. ###Code for epoch in range(2): # 데이터셋을 수차례 반복합니다. running_loss = 0.0 for i, data in enumerate(trainloader, 0): # [inputs, labels]의 목록인 data로부터 입력을 받은 후; inputs, labels = data # 변화도(Gradient) 매개변수를 0으로 만들고 optimizer.zero_grad() # 순전파 + 역전파 + 최적화를 한 후 outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # 통계를 출력합니다. running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ###Output _____no_output_____ ###Markdown 학습한 모델을 저장해보겠습니다: ###Code PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) ###Output _____no_output_____ ###Markdown PyTorch 모델을 저장하는 자세한 방법은 `여기 `_를 참조해주세요.5. 시험용 데이터로 신경망 검사하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^지금까지 학습용 데이터셋을 2회 반복하며 신경망을 학습시켰습니다.신경망이 전혀 배운게 없을지도 모르니 확인해봅니다.신경망이 예측한 출력과 진짜 정답(Ground-truth)을 비교하는 방식으로 확인합니다.만약 예측이 맞다면 샘플을 '맞은 예측값(correct predictions)' 목록에 넣겠습니다.첫번째로 시험용 데이터를 좀 보겠습니다. ###Code dataiter = iter(testloader) images, labels = dataiter.next() # 이미지를 출력합니다. imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown 이제, 저장했던 모델을 불러오도록 하겠습니다 (주: 모델을 저장하고 다시 불러오는작업은 여기에서는 불필요하지만, 어떻게 하는지 설명을 위해 해보겠습니다): ###Code net = Net() net.load_state_dict(torch.load(PATH)) ###Output _____no_output_____ ###Markdown 좋습니다, 이제 이 예제들을 신경망이 어떻게 예측했는지를 보겠습니다: ###Code outputs = net(images) ###Output _____no_output_____ ###Markdown 출력은 10개 분류 각각에 대한 값으로 나타납니다. 어떤 분류에 대해서 더 높은 값이나타난다는 것은, 신경망이 그 이미지가 해당 분류에 더 가깝다고 생각한다는 것입니다.따라서, 가장 높은 값을 갖는 인덱스(index)를 뽑아보겠습니다: ###Code _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown 결과가 괜찮아보이네요.그럼 전체 데이터셋에 대해서는 어떻게 동작하는지 보겠습니다. ###Code correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) ###Output _____no_output_____ ###Markdown (10가지 분류 중에 하나를 무작위로) 찍었을 때의 정확도인 10% 보다는 나아보입니다.신경망이 뭔가 배우긴 한 것 같네요.그럼 어떤 것들을 더 잘 분류하고, 어떤 것들을 더 못했는지 알아보겠습니다: ###Code class_correct = list(0. for i in range(10)) class_total = list(0. 
for i in range(10)) with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) ###Output _____no_output_____ ###Markdown 자, 이제 다음으로 무엇을 해볼까요?이러한 신경망들을 GPU에서 실행하려면 어떻게 해야 할까요?GPU에서 학습하기----------------Tensor를 GPU로 이동했던 것처럼, 신경망 또한 GPU로 옮길 수 있습니다.먼저 (CUDA를 사용할 수 있다면) 첫번째 CUDA 장치를 사용하도록 설정합니다: ###Code device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # CUDA 기기가 존재한다면, 아래 코드가 CUDA 장치를 출력합니다: print(device) ###Output _____no_output_____ ###Markdown 이 섹션의 나머지 부분에서는 ``device`` 를 CUDA 장치라고 가정하겠습니다.그리고 이 메소드(Method)들은 재귀적으로 모든 모듈의 매개변수와 버퍼를CUDA tensor로 변경합니다:.. code:: python net.to(device)또한, 각 단계에서 입력(input)과 정답(target)도 GPU로 보내야 한다는 것도 기억해야합니다:.. code:: python inputs, labels = data[0].to(device), data[1].to(device)CPU와 비교했을 때 어마어마한 속도 차이가 나지 않는 것은 왜 그럴까요?그 이유는 바로 신경망이 너무 작기 때문입니다.**연습:** 신경망의 크기를 키워보고, 얼마나 빨라지는지 확인해보세요.(첫번째 ``nn.Conv2d`` 의 2번째 인자와 두번째 ``nn.Conv2d`` 의 1번째 인자는같은 숫자여야 합니다.)**다음 목표들을 달성했습니다**:- 높은 수준에서 PyTorch의 Tensor library와 신경망을 이해합니다.- 이미지를 분류하는 작은 신경망을 학습시킵니다.여러개의 GPU에서 학습하기-------------------------모든 GPU를 활용해서 더욱 더 속도를 올리고 싶다면, :doc:`data_parallel_tutorial`을 참고하세요.이제 무엇을 해볼까요?------------------------ :doc:`Train neural nets to play video games `- `Train a state-of-the-art ResNet network on imagenet`_- `Train a face generator using Generative Adversarial Networks`_- `Train a word-level language model using Recurrent LSTM networks`_- `다른 예제들 참고하기`_- `더 많은 튜토리얼 보기`_- `포럼에서 PyTorch에 대해 얘기하기`_- `Slack에서 다른 사용자와 대화하기`_ ###Code # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% del dataiter # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% ###Output _____no_output_____ ###Markdown 분류기(Classifier) 학습하기============================지금까지 어떻게 신경망을 정의하고, 손실을 계산하며 또 가중치를 갱신하는지에대해서 배웠습니다.이제 아마도 이런 생각을 하고 계실텐데요,데이터는 어떻게 하나요?------------------------일반적으로 이미지나 텍스트, 오디오나 비디오 데이터를 다룰 때는 표준 Python 패키지를이용하여 NumPy 배열로 불러오면 됩니다. 그 후 그 배열을 ``torch.*Tensor`` 로 변환합니다.- 이미지는 Pillow나 OpenCV 같은 패키지가 유용합니다.- 오디오를 처리할 때는 SciPy와 LibROSA가 유용하고요.- 텍스트의 경우에는 그냥 Python이나 Cython을 사용해도 되고, NLTK나 SpaCy도 유용합니다.특별히 영상 분야를 위한 ``torchvision`` 이라는 패키지가 만들어져 있는데,여기에는 ImageNet이나 CIFAR10, MNIST 등과 같이 일반적으로 사용하는 데이터셋을 위한데이터 로더(data loader), 즉 ``torchvision.datasets`` 과 이미지용 데이터 변환기(data transformer), 즉 ``torch.utils.data.DataLoader`` 가 포함되어 있습니다.이러한 기능은 엄청나게 편리하며, 매번 유사한 코드(boilerplate code)를 반복해서작성하는 것을 피할 수 있습니다.이 튜토리얼에서는 CIFAR10 데이터셋을 사용합니다. 여기에는 다음과 같은 분류들이있습니다: '비행기(airplane)', '자동차(automobile)', '새(bird)', '고양이(cat)','사슴(deer)', '개(dog)', '개구리(frog)', '말(horse)', '배(ship)', '트럭(truck)'.그리고 CIFAR10에 포함된 이미지의 크기는 3x32x32로, 이는 32x32 픽셀 크기의 이미지가3개 채널(channel)의 색상로 이뤄져 있다는 것을 뜻합니다... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10이미지 분류기 학습하기----------------------------다음과 같은 단계로 진행해보겠습니다:1. ``torchvision`` 을 사용하여 CIFAR10의 학습용 / 시험용 데이터셋을 불러오고, 정규화(nomarlizing)합니다.2. 합성곱 신경망(Convolution Neural Network)을 정의합니다.3. 손실 함수를 정의합니다.4. 학습용 데이터를 사용하여 신경망을 학습합니다.5. 시험용 데이터를 사용하여 신경망을 검사합니다.1. CIFAR10을 불러오고 정규화하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^``torchvision`` 을 사용하여 매우 쉽게 CIFAR10을 불러올 수 있습니다. 
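One detail worth seeing in numbers before loading the data: the `transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` used a couple of cells below maps tensors from [0, 1] to [-1, 1] via (x - 0.5) / 0.5. A quick self-contained check, not required for the tutorial itself:

```python
import torch
import torchvision.transforms as transforms

normalize = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
x = torch.rand(3, 32, 32)                    # a fake image tensor with values in [0, 1]
y = normalize(x.clone())                     # (x - 0.5) / 0.5, channel by channel
print(float(y.min()), float(y.max()))        # close to -1.0 and 1.0
print(torch.allclose(y, (x - 0.5) / 0.5))    # True
```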
###Code import torch import torchvision import torchvision.transforms as transforms ###Output _____no_output_____ ###Markdown torchvision 데이터셋의 출력(output)은 [0, 1] 범위를 갖는 PILImage 이미지입니다.이를 [-1, 1]의 범위로 정규화된 Tensor로 변환합니다.Note만약 Windows 환경에서 BrokenPipeError가 발생한다면, torch.utils.data.DataLoader()의 num_worker를 0으로 설정해보세요. ###Code transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 4 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ###Output _____no_output_____ ###Markdown 재미삼아 학습용 이미지 몇 개를 보겠습니다. ###Code import matplotlib.pyplot as plt import numpy as np # 이미지를 보여주기 위한 함수 def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # 학습용 이미지를 무작위로 가져오기 dataiter = iter(trainloader) images, labels = dataiter.next() # 이미지 보여주기 imshow(torchvision.utils.make_grid(images)) # 정답(label) 출력 print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size))) ###Output _____no_output_____ ###Markdown 2. 합성곱 신경망(Convolution Neural Network) 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^이전의 신경망 섹션에서 신경망을 복사한 후, (기존에 1채널 이미지만 처리하도록정의된 것을) 3채널 이미지를 처리할 수 있도록 수정합니다. ###Code import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # 배치를 제외한 모든 차원을 평탄화(flatten) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ###Output _____no_output_____ ###Markdown 3. 손실 함수와 Optimizer 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^교차 엔트로피 손실(Cross-Entropy loss)과 모멘텀(momentum) 값을 갖는 SGD를 사용합니다. ###Code import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ###Output _____no_output_____ ###Markdown 4. 신경망 학습하기^^^^^^^^^^^^^^^^^^^^이제 재미있는 부분이 시작됩니다.단순히 데이터를 반복해서 신경망에 입력으로 제공하고, 최적화(Optimize)만 하면됩니다. ###Code for epoch in range(2): # 데이터셋을 수차례 반복합니다. running_loss = 0.0 for i, data in enumerate(trainloader, 0): # [inputs, labels]의 목록인 data로부터 입력을 받은 후; inputs, labels = data # 변화도(Gradient) 매개변수를 0으로 만들고 optimizer.zero_grad() # 순전파 + 역전파 + 최적화를 한 후 outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # 통계를 출력합니다. running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}') running_loss = 0.0 print('Finished Training') ###Output _____no_output_____ ###Markdown 학습한 모델을 저장해보겠습니다: ###Code PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) ###Output _____no_output_____ ###Markdown PyTorch 모델을 저장하는 자세한 방법은 `여기 `_를 참조해주세요.5. 
시험용 데이터로 신경망 검사하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^지금까지 학습용 데이터셋을 2회 반복하며 신경망을 학습시켰습니다.신경망이 전혀 배운게 없을지도 모르니 확인해봅니다.신경망이 예측한 출력과 진짜 정답(Ground-truth)을 비교하는 방식으로 확인합니다.만약 예측이 맞다면 샘플을 '맞은 예측값(correct predictions)' 목록에 넣겠습니다.첫번째로 시험용 데이터를 좀 보겠습니다. ###Code dataiter = iter(testloader) images, labels = dataiter.next() # 이미지를 출력합니다. imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4))) ###Output _____no_output_____ ###Markdown 이제, 저장했던 모델을 불러오도록 하겠습니다 (주: 모델을 저장하고 다시 불러오는작업은 여기에서는 불필요하지만, 어떻게 하는지 설명을 위해 해보겠습니다): ###Code net = Net() net.load_state_dict(torch.load(PATH)) ###Output _____no_output_____ ###Markdown 좋습니다, 이제 이 예제들을 신경망이 어떻게 예측했는지를 보겠습니다: ###Code outputs = net(images) ###Output _____no_output_____ ###Markdown 출력은 10개 분류 각각에 대한 값으로 나타납니다. 어떤 분류에 대해서 더 높은 값이나타난다는 것은, 신경망이 그 이미지가 해당 분류에 더 가깝다고 생각한다는 것입니다.따라서, 가장 높은 값을 갖는 인덱스(index)를 뽑아보겠습니다: ###Code _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}' for j in range(4))) ###Output _____no_output_____ ###Markdown 결과가 괜찮아보이네요.그럼 전체 데이터셋에 대해서는 어떻게 동작하는지 보겠습니다. ###Code correct = 0 total = 0 # 학습 중이 아니므로, 출력에 대한 변화도를 계산할 필요가 없습니다 with torch.no_grad(): for data in testloader: images, labels = data # 신경망에 이미지를 통과시켜 출력을 계산합니다 outputs = net(images) # 가장 높은 값(energy)를 갖는 분류(class)를 정답으로 선택하겠습니다 _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %') ###Output _____no_output_____ ###Markdown (10가지 분류 중에 하나를 무작위로) 찍었을 때의 정확도인 10% 보다는 나아보입니다.신경망이 뭔가 배우긴 한 것 같네요.그럼 어떤 것들을 더 잘 분류하고, 어떤 것들을 더 못했는지 알아보겠습니다: ###Code # 각 분류(class)에 대한 예측값 계산을 위해 준비 correct_pred = {classname: 0 for classname in classes} total_pred = {classname: 0 for classname in classes} # 변화도는 여전히 필요하지 않습니다 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predictions = torch.max(outputs, 1) # 각 분류별로 올바른 예측 수를 모읍니다 for label, prediction in zip(labels, predictions): if label == prediction: correct_pred[classes[label]] += 1 total_pred[classes[label]] += 1 # 각 분류별 정확도(accuracy)를 출력합니다 for classname, correct_count in correct_pred.items(): accuracy = 100 * float(correct_count) / total_pred[classname] print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %') ###Output _____no_output_____ ###Markdown 자, 이제 다음으로 무엇을 해볼까요?이러한 신경망들을 GPU에서 실행하려면 어떻게 해야 할까요?GPU에서 학습하기----------------Tensor를 GPU로 이동했던 것처럼, 신경망 또한 GPU로 옮길 수 있습니다.먼저 (CUDA를 사용할 수 있다면) 첫번째 CUDA 장치를 사용하도록 설정합니다: ###Code device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # CUDA 기기가 존재한다면, 아래 코드가 CUDA 장치를 출력합니다: print(device) ###Output _____no_output_____ ###Markdown 이 섹션의 나머지 부분에서는 ``device`` 를 CUDA 장치라고 가정하겠습니다.그리고 이 메소드(Method)들은 재귀적으로 모든 모듈의 매개변수와 버퍼를CUDA tensor로 변경합니다:.. code:: python net.to(device)또한, 각 단계에서 입력(input)과 정답(target)도 GPU로 보내야 한다는 것도 기억해야합니다:.. 
code:: python inputs, labels = data[0].to(device), data[1].to(device)CPU와 비교했을 때 어마어마한 속도 차이가 나지 않는 것은 왜 그럴까요?그 이유는 바로 신경망이 너무 작기 때문입니다.**연습:** 신경망의 크기를 키워보고, 얼마나 빨라지는지 확인해보세요.(첫번째 ``nn.Conv2d`` 의 2번째 인자와 두번째 ``nn.Conv2d`` 의 1번째 인자는같은 숫자여야 합니다.)**다음 목표들을 달성했습니다**:- 높은 수준에서 PyTorch의 Tensor library와 신경망을 이해합니다.- 이미지를 분류하는 작은 신경망을 학습시킵니다.여러개의 GPU에서 학습하기-------------------------모든 GPU를 활용해서 더욱 더 속도를 올리고 싶다면, :doc:`data_parallel_tutorial`을 참고하세요.이제 무엇을 해볼까요?------------------------ :doc:`비디오 게임을 할 수 있는 신경망 학습시키기 `- `imagenet으로 최첨단(state-of-the-art) ResNet 신경망 학습시키기`_- `적대적 생성 신경망으로 얼굴 생성기 학습시키기`_- `순환 LSTM 네트워크를 사용해 단어 단위 언어 모델 학습시키기`_- `다른 예제들 참고하기`_- `더 많은 튜토리얼 보기`_- `포럼에서 PyTorch에 대해 얘기하기`_- `Slack에서 다른 사용자와 대화하기`_ ###Code # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% del dataiter # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% ###Output _____no_output_____ ###Markdown 분류기(Classifier) 학습하기============================지금까지 어떻게 신경망을 정의하고, 손실을 계산하며 또 가중치를 갱신하는지에대해서 배웠습니다.이제 아마도 이런 생각을 하고 계실텐데요,데이터는 어떻게 하나요?------------------------일반적으로 이미지나 텍스트, 오디오나 비디오 데이터를 다룰 때는 표준 Python 패키지를이용하여 NumPy 배열로 불러오면 됩니다. 그 후 그 배열을 ``torch.*Tensor`` 로 변환합니다.- 이미지는 Pillow나 OpenCV 같은 패키지가 유용합니다.- 오디오를 처리할 때는 SciPy와 LibROSA가 유용하고요.- 텍스트의 경우에는 그냥 Python이나 Cython을 사용해도 되고, NLTK나 SpaCy도 유용합니다.특별히 영상 분야를 위한 ``torchvision`` 이라는 패키지가 만들어져 있는데,여기에는 ImageNet이나 CIFAR10, MNIST 등과 같이 일반적으로 사용하는 데이터셋을 위한데이터 로더(data loader), 즉 ``torchvision.datasets`` 과 이미지용 데이터 변환기(data transformer), 즉 ``torch.utils.data.DataLoader`` 가 포함되어 있습니다.이러한 기능은 엄청나게 편리하며, 매번 유사한 코드(boilerplate code)를 반복해서작성하는 것을 피할 수 있습니다.이 튜토리얼에서는 CIFAR10 데이터셋을 사용합니다. 여기에는 다음과 같은 분류들이있습니다: '비행기(airplane)', '자동차(automobile)', '새(bird)', '고양이(cat)','사슴(deer)', '개(dog)', '개구리(frog)', '말(horse)', '배(ship)', '트럭(truck)'.그리고 CIFAR10에 포함된 이미지의 크기는 3x32x32로, 이는 32x32 픽셀 크기의 이미지가3개 채널(channel)의 색상로 이뤄져 있다는 것을 뜻합니다... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10이미지 분류기 학습하기----------------------------다음과 같은 단계로 진행해보겠습니다:1. ``torchvision`` 을 사용하여 CIFAR10의 학습용 / 시험용 데이터셋을 불러오고, 정규화(nomarlizing)합니다.2. 합성곱 신경망(Convolution Neural Network)을 정의합니다.3. 손실 함수를 정의합니다.4. 학습용 데이터를 사용하여 신경망을 학습합니다.5. 시험용 데이터를 사용하여 신경망을 검사합니다.1. CIFAR10을 불러오고 정규화하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^``torchvision`` 을 사용하여 매우 쉽게 CIFAR10을 불러올 수 있습니다. ###Code import torch import torchvision import torchvision.transforms as transforms ###Output _____no_output_____ ###Markdown torchvision 데이터셋의 출력(output)은 [0, 1] 범위를 갖는 PILImage 이미지입니다.이를 [-1, 1]의 범위로 정규화된 Tensor로 변환합니다.Note만약 Windows 환경에서 BrokenPipeError가 발생한다면, torch.utils.data.DataLoader()의 num_worker를 0으로 설정해보세요. ###Code transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 4 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ###Output _____no_output_____ ###Markdown 재미삼아 학습용 이미지 몇 개를 보겠습니다. 
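Before plotting, it can help to see what one batch from the loader actually contains; the sketch below only prints shapes and label names and assumes `trainloader`, `classes`, and `batch_size` from the cells above:

```python
# Assumes trainloader, classes, and batch_size from the cells above.
images, labels = next(iter(trainloader))
print(images.shape)                          # torch.Size([4, 3, 32, 32]) with batch_size = 4
print(labels.shape)                          # torch.Size([4])
print([classes[label] for label in labels])  # the class name of each image in the batch
```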
###Code import matplotlib.pyplot as plt import numpy as np # 이미지를 보여주기 위한 함수 def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # 학습용 이미지를 무작위로 가져오기 dataiter = iter(trainloader) images, labels = dataiter.next() # 이미지 보여주기 imshow(torchvision.utils.make_grid(images)) # 정답(label) 출력 print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size))) ###Output _____no_output_____ ###Markdown 2. 합성곱 신경망(Convolution Neural Network) 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^이전의 신경망 섹션에서 신경망을 복사한 후, (기존에 1채널 이미지만 처리하도록정의된 것을) 3채널 이미지를 처리할 수 있도록 수정합니다. ###Code import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # 배치를 제외한 모든 차원을 평탄화(flatten) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ###Output _____no_output_____ ###Markdown 3. 손실 함수와 Optimizer 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^교차 엔트로피 손실(Cross-Entropy loss)과 모멘텀(momentum) 값을 갖는 SGD를 사용합니다. ###Code import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ###Output _____no_output_____ ###Markdown 4. 신경망 학습하기^^^^^^^^^^^^^^^^^^^^이제 재미있는 부분이 시작됩니다.단순히 데이터를 반복해서 신경망에 입력으로 제공하고, 최적화(Optimize)만 하면됩니다. ###Code for epoch in range(2): # 데이터셋을 수차례 반복합니다. running_loss = 0.0 for i, data in enumerate(trainloader, 0): # [inputs, labels]의 목록인 data로부터 입력을 받은 후; inputs, labels = data # 변화도(Gradient) 매개변수를 0으로 만들고 optimizer.zero_grad() # 순전파 + 역전파 + 최적화를 한 후 outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # 통계를 출력합니다. running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ###Output _____no_output_____ ###Markdown 학습한 모델을 저장해보겠습니다: ###Code PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) ###Output _____no_output_____ ###Markdown PyTorch 모델을 저장하는 자세한 방법은 `여기 `_를 참조해주세요.5. 시험용 데이터로 신경망 검사하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^지금까지 학습용 데이터셋을 2회 반복하며 신경망을 학습시켰습니다.신경망이 전혀 배운게 없을지도 모르니 확인해봅니다.신경망이 예측한 출력과 진짜 정답(Ground-truth)을 비교하는 방식으로 확인합니다.만약 예측이 맞다면 샘플을 '맞은 예측값(correct predictions)' 목록에 넣겠습니다.첫번째로 시험용 데이터를 좀 보겠습니다. ###Code dataiter = iter(testloader) images, labels = dataiter.next() # 이미지를 출력합니다. imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown 이제, 저장했던 모델을 불러오도록 하겠습니다 (주: 모델을 저장하고 다시 불러오는작업은 여기에서는 불필요하지만, 어떻게 하는지 설명을 위해 해보겠습니다): ###Code net = Net() net.load_state_dict(torch.load(PATH)) ###Output _____no_output_____ ###Markdown 좋습니다, 이제 이 예제들을 신경망이 어떻게 예측했는지를 보겠습니다: ###Code outputs = net(images) ###Output _____no_output_____ ###Markdown 출력은 10개 분류 각각에 대한 값으로 나타납니다. 어떤 분류에 대해서 더 높은 값이나타난다는 것은, 신경망이 그 이미지가 해당 분류에 더 가깝다고 생각한다는 것입니다.따라서, 가장 높은 값을 갖는 인덱스(index)를 뽑아보겠습니다: ###Code _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown 결과가 괜찮아보이네요.그럼 전체 데이터셋에 대해서는 어떻게 동작하는지 보겠습니다. 
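The cell below relies on `torch.max(outputs.data, 1)` to turn class scores into predicted labels; this tiny self-contained example shows what that call returns:

```python
import torch

scores = torch.tensor([[0.1, 2.0, -1.0],     # row 0: class 1 has the highest score
                       [3.0, 0.0, 0.5]])     # row 1: class 0 has the highest score
values, indices = torch.max(scores, 1)       # maximum over dim 1, the class dimension
print(values)    # tensor([2., 3.])
print(indices)   # tensor([1, 0]) -> the predicted class index for each row
```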
###Code correct = 0 total = 0 # 학습 중이 아니므로, 출력에 대한 변화도를 계산할 필요가 없습니다 with torch.no_grad(): for data in testloader: images, labels = data # 신경망에 이미지를 통과시켜 출력을 계산합니다 outputs = net(images) # 가장 높은 값(energy)를 갖는 분류(class)를 정답으로 선택하겠습니다 _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) ###Output _____no_output_____ ###Markdown (10가지 분류 중에 하나를 무작위로) 찍었을 때의 정확도인 10% 보다는 나아보입니다.신경망이 뭔가 배우긴 한 것 같네요.그럼 어떤 것들을 더 잘 분류하고, 어떤 것들을 더 못했는지 알아보겠습니다: ###Code # 각 분류(class)에 대한 예측값 계산을 위해 준비 correct_pred = {classname: 0 for classname in classes} total_pred = {classname: 0 for classname in classes} # 변화도는 여전히 필요하지 않습니다 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predictions = torch.max(outputs, 1) # 각 분류별로 올바른 예측 수를 모읍니다 for label, prediction in zip(labels, predictions): if label == prediction: correct_pred[classes[label]] += 1 total_pred[classes[label]] += 1 # 각 분류별 정확도(accuracy)를 출력합니다 for classname, correct_count in correct_pred.items(): accuracy = 100 * float(correct_count) / total_pred[classname] print("Accuracy for class {:5s} is: {:.1f} %".format(classname, accuracy)) ###Output _____no_output_____ ###Markdown 자, 이제 다음으로 무엇을 해볼까요?이러한 신경망들을 GPU에서 실행하려면 어떻게 해야 할까요?GPU에서 학습하기----------------Tensor를 GPU로 이동했던 것처럼, 신경망 또한 GPU로 옮길 수 있습니다.먼저 (CUDA를 사용할 수 있다면) 첫번째 CUDA 장치를 사용하도록 설정합니다: ###Code device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # CUDA 기기가 존재한다면, 아래 코드가 CUDA 장치를 출력합니다: print(device) ###Output _____no_output_____ ###Markdown 이 섹션의 나머지 부분에서는 ``device`` 를 CUDA 장치라고 가정하겠습니다.그리고 이 메소드(Method)들은 재귀적으로 모든 모듈의 매개변수와 버퍼를CUDA tensor로 변경합니다:.. code:: python net.to(device)또한, 각 단계에서 입력(input)과 정답(target)도 GPU로 보내야 한다는 것도 기억해야합니다:.. code:: python inputs, labels = data[0].to(device), data[1].to(device)CPU와 비교했을 때 어마어마한 속도 차이가 나지 않는 것은 왜 그럴까요?그 이유는 바로 신경망이 너무 작기 때문입니다.**연습:** 신경망의 크기를 키워보고, 얼마나 빨라지는지 확인해보세요.(첫번째 ``nn.Conv2d`` 의 2번째 인자와 두번째 ``nn.Conv2d`` 의 1번째 인자는같은 숫자여야 합니다.)**다음 목표들을 달성했습니다**:- 높은 수준에서 PyTorch의 Tensor library와 신경망을 이해합니다.- 이미지를 분류하는 작은 신경망을 학습시킵니다.여러개의 GPU에서 학습하기-------------------------모든 GPU를 활용해서 더욱 더 속도를 올리고 싶다면, :doc:`data_parallel_tutorial`을 참고하세요.이제 무엇을 해볼까요?------------------------ :doc:`비디오 게임을 할 수 있는 신경망 학습시키기 `- `imagenet으로 최첨단(state-of-the-art) ResNet 신경망 학습시키기`_- `적대적 생성 신경망으로 얼굴 생성기 학습시키기`_- `순환 LSTM 네트워크를 사용해 단어 단위 언어 모델 학습시키기`_- `다른 예제들 참고하기`_- `더 많은 튜토리얼 보기`_- `포럼에서 PyTorch에 대해 얘기하기`_- `Slack에서 다른 사용자와 대화하기`_ ###Code # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% del dataiter # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% ###Output _____no_output_____ ###Markdown 분류기(Classifier) 학습하기============================지금까지 어떻게 신경망을 정의하고, 손실을 계산하며 또 가중치를 갱신하는지에대해서 배웠습니다.이제 아마도 이런 생각을 하고 계실텐데요,데이터는 어떻게 하나요?------------------------일반적으로 이미지나 텍스트, 오디오나 비디오 데이터를 다룰 때는 표준 Python 패키지를이용하여 NumPy 배열로 불러오면 됩니다. 그 후 그 배열을 ``torch.*Tensor`` 로 변환합니다.- 이미지는 Pillow나 OpenCV 같은 패키지가 유용합니다.- 오디오를 처리할 때는 SciPy와 LibROSA가 유용하고요.- 텍스트의 경우에는 그냥 Python이나 Cython을 사용해도 되고, NLTK나 SpaCy도 유용합니다.특별히 영상 분야를 위한 ``torchvision`` 이라는 패키지가 만들어져 있는데,여기에는 Imagenet이나 CIFAR10, MNIST 등과 같이 일반적으로 사용하는 데이터셋을 위한데이터 로더(data loader), 즉 ``torchvision.datasets`` 과 이미지용 데이터 변환기(data transformer), 즉 ``torch.utils.data.DataLoader`` 가 포함되어 있습니다.이러한 기능은 엄청나게 편리하며, 매번 유사한 코드(boilerplate code)를 반복해서작성하는 것을 피할 수 있습니다.이 튜토리얼에서는 CIFAR10 데이터셋을 사용합니다. 
여기에는 다음과 같은 분류들이있습니다: '비행기(airplane)', '자동차(automobile)', '새(bird)', '고양이(cat)','사슴(deer)', '개(dog)', '개구리(frog)', '말(horse)', '배(ship)', '트럭(truck)'.그리고 CIFAR10에 포함된 이미지의 크기는 3x32x32로, 이는 32x32 픽셀 크기의 이미지가3개 채널(channel)의 색상로 이뤄져 있다는 것을 뜻합니다... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10이미지 분류기 학습하기----------------------------다음과 같은 단계로 진행해보겠습니다:1. ``torchvision`` 을 사용하여 CIFAR10의 학습용 / 시험용 데이터셋을 불러오고, 정규화(nomarlizing)합니다.2. 합성곱 신경망(Convolution Neural Network)을 정의합니다.3. 손실 함수를 정의합니다.4. 학습용 데이터를 사용하여 신경망을 학습합니다.5. 시험용 데이터를 사용하여 신경망을 검사합니다.1. CIFAR10을 불러오고 정규화하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^``torchvision`` 을 사용하여 매우 쉽게 CIFAR10을 불러올 수 있습니다. ###Code import torch import torchvision import torchvision.transforms as transforms ###Output _____no_output_____ ###Markdown torchvision 데이터셋의 출력(output)은 [0, 1] 범위를 갖는 PILImage 이미지입니다.이를 [-1, 1]의 범위로 정규화된 Tensor로 변환합니다.Note만약 Windows 환경에서 BrokenPipeError가 발생한다면, torch.utils.data.DataLoader()의 num_worker를 0으로 설정해보세요. ###Code transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 4 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ###Output _____no_output_____ ###Markdown 재미삼아 학습용 이미지 몇 개를 보겠습니다. ###Code import matplotlib.pyplot as plt import numpy as np # 이미지를 보여주기 위한 함수 def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # 학습용 이미지를 무작위로 가져오기 dataiter = iter(trainloader) images, labels = dataiter.next() # 이미지 보여주기 imshow(torchvision.utils.make_grid(images)) # 정답(label) 출력 print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size))) ###Output _____no_output_____ ###Markdown 2. 합성곱 신경망(Convolution Neural Network) 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^이전의 신경망 섹션에서 신경망을 복사한 후, (기존에 1채널 이미지만 처리하도록정의된 것을) 3채널 이미지를 처리할 수 있도록 수정합니다. ###Code import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # 배치를 제외한 모든 차원을 평탄화(flatten) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ###Output _____no_output_____ ###Markdown 3. 손실 함수와 Optimizer 정의하기^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^교차 엔트로피 손실(Cross-Entropy loss)과 모멘텀(momentum) 값을 갖는 SGD를 사용합니다. ###Code import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ###Output _____no_output_____ ###Markdown 4. 신경망 학습하기^^^^^^^^^^^^^^^^^^^^이제 재미있는 부분이 시작됩니다.단순히 데이터를 반복해서 신경망에 입력으로 제공하고, 최적화(Optimize)만 하면됩니다. ###Code for epoch in range(2): # 데이터셋을 수차례 반복합니다. 
running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ###Output _____no_output_____ ###Markdown Let us quickly save our trained model: ###Code PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) ###Output _____no_output_____ ###Markdown See `here `_ for more details on saving PyTorch models. 5. Test the network on the test data ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ We have trained the network for 2 passes over the training dataset. But we need to check whether the network has learnt anything at all. We will check this by comparing the class labels that the neural network predicts against the ground-truth. If a prediction is correct, we add the sample to the list of correct predictions. First, let us look at some of the test data. ###Code dataiter = iter(testloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown Next, let us load back the saved model (note: saving and re-loading the model is not necessary here, we only do it to illustrate how it is done): ###Code net = Net() net.load_state_dict(torch.load(PATH)) ###Output _____no_output_____ ###Markdown Okay, now let us see what the neural network thinks these examples above are: ###Code outputs = net(images) ###Output _____no_output_____ ###Markdown The outputs are values for each of the 10 classes. The higher the value for a class, the more the network thinks that the image belongs to that class. So, let us get the index of the highest value: ###Code _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) ###Output _____no_output_____ ###Markdown The results seem pretty good. Let us look at how the network performs on the whole dataset. 

###Code correct = 0 total = 0 # since we're not training, we don't need to calculate gradients for the outputs with torch.no_grad(): for data in testloader: images, labels = data # calculate outputs by running images through the network outputs = net(images) # the class with the highest value (energy) is what we choose as the prediction _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) ###Output _____no_output_____ ###Markdown That looks better than chance, which is 10% accuracy (randomly picking one class out of 10). It seems the network has learnt something. Let us look at which classes it classified well, and which ones it did not: ###Code # prepare to count predictions for each class correct_pred = {classname: 0 for classname in classes} total_pred = {classname: 0 for classname in classes} # again no gradients needed with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predictions = torch.max(outputs, 1) # collect the correct predictions for each class for label, prediction in zip(labels, predictions): if label == prediction: correct_pred[classes[label]] += 1 total_pred[classes[label]] += 1 # print the accuracy for each class for classname, correct_count in correct_pred.items(): accuracy = 100 * float(correct_count) / total_pred[classname] print("Accuracy for class {:5s} is: {:.1f} %".format(classname, accuracy)) ###Output _____no_output_____ ###Markdown So, what next? How do we run these neural networks on a GPU? Training on GPU ---------------- Just like you transfer a Tensor onto the GPU, you can transfer the neural net onto the GPU. Let us first define our device as the first visible CUDA device, if CUDA is available: ###Code device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Assuming that we are on a CUDA machine, this should print a CUDA device: print(device) ###Output _____no_output_____ ###Markdown The rest of this section assumes that ``device`` is a CUDA device. These methods will then recursively go over all modules and convert their parameters and buffers to CUDA tensors: .. code:: python net.to(device) Also remember that you will have to send the inputs and targets to the GPU at every step: .. code:: python inputs, labels = data[0].to(device), data[1].to(device) Why don't we notice a massive speedup compared to the CPU? Because the network is really small. **Exercise:** Try increasing the width of your network and see how much faster it gets (the 2nd argument of the first ``nn.Conv2d`` and the 1st argument of the second ``nn.Conv2d`` need to be the same number). **Goals achieved**: - Understand PyTorch's Tensor library and neural networks at a high level. - Train a small neural network to classify images. Training on multiple GPUs ------------------------- If you want an even bigger speedup by using all of your GPUs, please check out :doc:`data_parallel_tutorial`. Where do I go next? ------------------------ :doc:`Train a neural network that can play video games `- `Train a state-of-the-art ResNet network on imagenet`_- `Train a face generator using Generative Adversarial Networks`_- `Train a word-level language model using recurrent LSTM networks`_- `Look at other examples`_- `Check out more tutorials`_- `Talk about PyTorch on the forums`_- `Chat with other users on Slack`_ ###Code # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% del dataiter # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% ###Output _____no_output_____
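###Markdown As a small addition to the GPU exercise discussed above (not part of the original tutorial), here is a minimal sketch of the same training loop with the model and every batch moved to ``device``; it reuses ``net``, ``trainloader``, ``criterion`` and ``optimizer`` exactly as defined earlier. ###Code
net.to(device)
for epoch in range(2):
    for data in trainloader:
        # move each batch to the GPU before the usual training step
        inputs, labels = data[0].to(device), data[1].to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
###Output _____no_output_____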
notebooks/helmholtz/helmholtz_combined_exterior.ipynb
###Markdown Scattering from a sphere using a combined direct formulation Background In this tutorial, we will solve the problem of scattering from the unit sphere $\Omega$ using a combined integral formulation and an incident wave defined by$$u^{\text{inc}}(\mathbf x) = \mathrm{e}^{\mathrm{i} k x}.$$where $\mathbf x = (x, y, z)$.The PDE is given by the Helmholtz equation:$$\Delta u + k^2 u = 0, \quad \text{ in } \mathbb{R}^3 \backslash \Omega,$$where $u=u^\text{s}+u^\text{inc}$ is the total acoustic field and $u^\text{s}$ satisfies the Sommerfeld radiation condition$$\frac{\partial u^\text{s}}{\partial r}-\mathrm{i}ku^\text{s}=o(r^{-1})$$for $r:=|\mathbf{x}|\rightarrow\infty$.From Green's representation formula, one can derive that$$u(\mathbf x) = u^\text{inc}-\int_{\Gamma}g(\mathbf x,\mathbf y)\frac{\partial u}{\partial\nu}(\mathbf y)\mathrm{d}\mathbf{y}.$$Here, $g(\mathbf x, \mathbf y)$ is the acoustic Green's function given by$$g(\mathbf x, \mathbf y):=\frac{\mathrm{e}^{\mathrm{i} k |\mathbf{x}-\mathbf{y}|}}{4 \pi |\mathbf{x}-\mathbf{y}|}.$$The problem has therefore been reduced to computing the normal derivative $u_\nu:=\frac{\partial u}{\partial\nu}$ on the boundary $\Gamma$. This is achieved using the following boundary integral equation formulation.$$(\tfrac12\mathsf{Id} + \mathsf{K}' - \mathrm{i} \eta \mathsf{V}) u_\nu(\mathbf{x}) = \frac{\partial u^{\text{inc}}}{\partial \nu}(\mathbf{x}) - \mathrm{i} \eta u^{\text{inc}}(\mathbf{x}), \quad \mathbf{x} \in \Gamma.$$where $\mathsf{Id}$, $\mathsf{K}'$ and $\mathsf{V}$ are identity, adjoint double layer and single layer boundary operators. More details of the derivation of this formulation and its properties can be found in the article Chandler-Wilde et al (2012). Implementation First we import the Bempp module and NumPy. ###Code import bempp.api import numpy as np ###Output _____no_output_____ ###Markdown We define the wavenumber ###Code k = 15. ###Output _____no_output_____ ###Markdown The following command creates a sphere mesh. ###Code grid = bempp.api.shapes.regular_sphere(3) ###Output _____no_output_____ ###Markdown As basis functions, we use piecewise constant functions over the elements of the mesh. The corresponding space is initialised as follows. ###Code piecewise_const_space = bempp.api.function_space(grid, "DP", 0) ###Output _____no_output_____ ###Markdown We now initialise the boundary operators.A boundary operator always takes at least three space arguments: a domain space, a range space and the test space (dual to the range). In this example we only work on the space $\mathcal{L}^2(\Gamma)$ and we can choose all spaces to be identical. ###Code identity = bempp.api.operators.boundary.sparse.identity( piecewise_const_space, piecewise_const_space, piecewise_const_space) adlp = bempp.api.operators.boundary.helmholtz.adjoint_double_layer( piecewise_const_space, piecewise_const_space, piecewise_const_space, k) slp = bempp.api.operators.boundary.helmholtz.single_layer( piecewise_const_space, piecewise_const_space, piecewise_const_space, k) ###Output _____no_output_____ ###Markdown Standard arithmetic operators can be used to create linear combinations of boundary operators. ###Code lhs = 0.5 * identity + adlp - 1j * k * slp ###Output _____no_output_____ ###Markdown We now form the right-hand side by defining a GridFunction using Python callable. 
###Code @bempp.api.complex_callable def combined_data(x, n, domain_index, result): result[0] = 1j * k * np.exp(1j * k * x[0]) * (n[0]-1) grid_fun = bempp.api.GridFunction(piecewise_const_space, fun=combined_data) ###Output _____no_output_____ ###Markdown We can now use GMRES to solve the problem. ###Code from bempp.api.linalg import gmres neumann_fun, info = gmres(lhs, grid_fun, tol=1E-5) ###Output _____no_output_____ ###Markdown `gmres` returns a grid function `neumann_fun` and an integer `info`. When everything works fine, `info` is equal to 0. At this stage, we have the surface solution of the integral equation. Now we will evaluate the solution in the domain of interest. We define the evaluation points as follows. ###Code Nx = 200 Ny = 200 xmin, xmax, ymin, ymax = [-3, 3, -3, 3] plot_grid = np.mgrid[xmin:xmax:Nx * 1j, ymin:ymax:Ny * 1j] points = np.vstack((plot_grid[0].ravel(), plot_grid[1].ravel(), np.zeros(plot_grid[0].size))) u_evaluated = np.zeros(points.shape[1], dtype=np.complex128) u_evaluated[:] = np.nan ###Output _____no_output_____ ###Markdown This will generate a grid of points in the $x$-$y$ plane. We then create a single layer potential operator and use it to evaluate the solution at these points; the variable ``idx`` lets us compute the solution only at points located outside the unit circle of the plane. ###Code x, y, z = points idx = np.sqrt(x**2 + y**2) > 1.0 from bempp.api.operators.potential import helmholtz as helmholtz_potential slp_pot = helmholtz_potential.single_layer( piecewise_const_space, points[:, idx], k) res = np.real(np.exp(1j *k * points[0, idx]) - slp_pot.evaluate(neumann_fun)) u_evaluated[idx] = res.flat ###Output _____no_output_____ ###Markdown We now plot the slice of the domain solution. ###Code %matplotlib inline u_evaluated = u_evaluated.reshape((Nx, Ny)) from matplotlib import pyplot as plt fig = plt.figure(figsize=(10, 8)) plt.imshow(np.real(u_evaluated.T), extent=[-3, 3, -3, 3]) plt.xlabel('x') plt.ylabel('y') plt.colorbar() plt.title("Scattering from the unit sphere, solution in plane z=0") ###Output _____no_output_____
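###Markdown The plot above shows the real part of the total field. As an optional extra (not part of the original notebook), the magnitude of the total field can be visualised in the same way, reusing ``points``, ``idx``, ``slp_pot`` and ``neumann_fun`` from above. ###Code
# magnitude of the total field on the same slice
u_total = np.zeros(points.shape[1], dtype=np.complex128)
u_total[:] = np.nan
u_total[idx] = (np.exp(1j * k * points[0, idx]) - slp_pot.evaluate(neumann_fun)).flat
plt.figure(figsize=(10, 8))
plt.imshow(np.abs(u_total.reshape((Nx, Ny)).T), extent=[-3, 3, -3, 3])
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
plt.title("Magnitude of the total field in the plane z=0")
###Output _____no_output_____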
scikit-learn/scikit-learn-DecisionTree.ipynb
###Markdown Decision Trees http://scikit-learn.org/stable/modules/tree.html ###Code import graphviz from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() X = iris.data[:, 2:] # petal length and width y = iris.target tree_clf = DecisionTreeClassifier(max_depth=2) tree_clf.fit(X, y) from sklearn.tree import export_graphviz dot_data = export_graphviz( tree_clf, out_file=None, feature_names=iris.feature_names[2:], class_names=iris.target_names, rounded=True, filled=True ) graph = graphviz.Source(dot_data) graph ###Output _____no_output_____
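###Markdown As a quick follow-up sketch (not part of the original snippet), the fitted tree can also be queried for a predicted class and its class probabilities; the sample values below are made up purely for illustration. ###Code
# hypothetical sample: petal length 5.0 cm, petal width 1.5 cm
print(tree_clf.predict([[5.0, 1.5]]))
print(tree_clf.predict_proba([[5.0, 1.5]]))
###Output _____no_output_____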
tests/practice/dlpn_3.6-classifying-newswires.ipynb
###Markdown Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away: ###Code from keras.datasets import reuters (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000) ###Output Downloading data from https://s3.amazonaws.com/text-datasets/reuters.npz 2113536/2110848 [==============================] - 1s 0us/step ###Markdown Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples: ###Code len(train_data) len(test_data) ###Output _____no_output_____ ###Markdown As with the IMDB reviews, each example is a list of integers (word indices): ###Code train_data[10] ###Output _____no_output_____ ###Markdown Here's how you can decode it back to words, in case you are curious: ###Code word_index = reuters.get_word_index() reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) # Note that our indices were offset by 3 # because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown". decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]]) decoded_newswire ###Output _____no_output_____ ###Markdown The label associated with an example is an integer between 0 and 45: a topic index. ###Code train_labels[10] ###Output _____no_output_____ ###Markdown Preparing the dataWe can vectorize the data with the exact same code as in our previous example: ###Code import numpy as np def vectorize_sequences(sequences, dimension=10000): results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. 
return results # Our vectorized training data x_train = vectorize_sequences(train_data) # Our vectorized test data x_test = vectorize_sequences(test_data) ###Output _____no_output_____ ###Markdown To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.: ###Code def to_one_hot(labels, dimension=46): results = np.zeros((len(labels), dimension)) for i, label in enumerate(labels): results[i, label] = 1. return results # Our vectorized training labels one_hot_train_labels = to_one_hot(train_labels) # Our vectorized test labels one_hot_test_labels = to_one_hot(test_labels) ###Output _____no_output_____ ###Markdown Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example: ###Code from keras.utils.np_utils import to_categorical one_hot_train_labels = to_categorical(train_labels) one_hot_test_labels = to_categorical(test_labels) ###Output _____no_output_____ ###Markdown Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units: ###Code from keras import models from keras import layers model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(10000,))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(46, activation='softmax')) ###Output /srv/venv/lib/python3.5/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead if d.decorator_argspec is not None), _inspect.getargspec(target)) ###Markdown There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. 
The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels. ###Code model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) ###Output /srv/venv/lib/python3.5/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead if d.decorator_argspec is not None), _inspect.getargspec(target)) ###Markdown Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set: ###Code x_val = x_train[:1000] partial_x_train = x_train[1000:] y_val = one_hot_train_labels[:1000] partial_y_train = one_hot_train_labels[1000:] ###Output _____no_output_____ ###Markdown Now let's train our network for 20 epochs: ###Code history = model.fit(partial_x_train, partial_y_train, epochs=2, batch_size=512, validation_data=(x_val, y_val)) ###Output /srv/venv/lib/python3.5/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead if d.decorator_argspec is not None), _inspect.getargspec(target)) ###Markdown Let's display its loss and accuracy curves: ###Code import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure acc = history.history['acc'] val_acc = history.history['val_acc'] plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set: ###Code model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(10000,))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(46, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(partial_x_train, partial_y_train, epochs=2, batch_size=512, validation_data=(x_val, y_val)) results = model.evaluate(x_test, one_hot_test_labels) results ###Output _____no_output_____ ###Markdown Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline: ###Code import copy test_labels_copy = copy.copy(test_labels) np.random.shuffle(test_labels_copy) float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels) ###Output _____no_output_____ ###Markdown Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. 
Let's generate topic predictions for all of the test data: ###Code predictions = model.predict(x_test) ###Output _____no_output_____ ###Markdown Each entry in `predictions` is a vector of length 46: ###Code predictions[0].shape ###Output _____no_output_____ ###Markdown The coefficients in this vector sum to 1: ###Code np.sum(predictions[0]) ###Output _____no_output_____ ###Markdown The largest entry is the predicted class, i.e. the class with the highest probability: ###Code np.argmax(predictions[0]) ###Output _____no_output_____ ###Markdown A different way to handle the labels and the lossWe mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such: ###Code y_train = np.array(train_labels) y_test = np.array(test_labels) ###Output _____no_output_____ ###Markdown The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`: ###Code model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc']) ###Output /srv/venv/lib/python3.5/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead if d.decorator_argspec is not None), _inspect.getargspec(target)) ###Markdown This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layersWe mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional. ###Code model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(10000,))) model.add(layers.Dense(4, activation='relu')) model.add(layers.Dense(46, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(partial_x_train, partial_y_train, epochs=2, batch_size=128, validation_data=(x_val, y_val)) ###Output /srv/venv/lib/python3.5/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead if d.decorator_argspec is not None), _inspect.getargspec(target))
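###Markdown To put a rough number on this information-bottleneck effect, one can evaluate the 4-unit model on the test data and compare it with the earlier result; a minimal sketch (not part of the original notebook) reusing the variables defined above: ###Code
# evaluate the bottlenecked model; the accuracy is typically
# noticeably lower than with 64-unit intermediate layers
bottleneck_results = model.evaluate(x_test, one_hot_test_labels)
print(bottleneck_results)
###Output _____no_output_____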
ml_src/old_code/clothing-attribute-analysis.ipynb
###Markdown Data Loaded. Build the Model ###Code from keras.applications.vgg16 import VGG16 from keras.applications.resnet50 import ResNet50 from keras.models import Model, Sequential, Input from keras.layers import Dense, Conv2D, GlobalAveragePooling2D, MaxPooling2D from keras.layers import BatchNormalization, Dropout, Flatten, Dense, Activation from keras import backend as K from keras.preprocessing import image from keras.optimizers import Adam, RMSprop from imagenet_dl_models.keras.vgg16 import preprocess_input_vgg train_datagen = image.ImageDataGenerator( preprocessing_function=preprocess_input_vgg, # zca_whitening=True, # apply ZCA whitening rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180) width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=True, # randomly flip images vertical_flip=False # randomly flip images ) valid_datagen = image.ImageDataGenerator(preprocessing_function=preprocess_input_vgg) test_datagen = image.ImageDataGenerator(preprocessing_function=preprocess_input_vgg) # batch_size = 32 # epochs=10 # train_steps_per_epoch = len(X_train) // batch_size # valid_steps_per_epoch = len(X_valid) // batch_size # train_genarator = train_datagen.flow(X_train, Y_train, batch_size=batch_size, shuffle=True) # valid_generator = valid_datagen.flow(X_valid, Y_valid, batch_size=batch_size, shuffle=False) # test_generator = test_datagen.flow(X_test, Y_test, batch_size=batch_size, shuffle=False) vgg_conv_model = VGG16(weights='imagenet', include_top=False, input_shape=(400, 266, 3)) def add_bn_layers(inp_layer, dropout_p, output_dims=3, activation="softmax"): print(inp_layer) inp_layer = MaxPooling2D()(inp_layer) inp_layer = BatchNormalization(axis=1)(inp_layer) inp_layer = Flatten()(inp_layer) # Add FC Layer 1 # dropout_1 = Dropout(dropout_p/4)(bn_1) dense_1 = Dense(1024)(inp_layer) dense_1 = BatchNormalization()(dense_1) dense_1 = Activation("relu")(dense_1) dense_2 = Dense(512)(dense_1) dense_2 = BatchNormalization()(dense_2) dense_2 = Activation("relu")(dense_2) # # Add FC Layer 2 # bn_2 = BatchNormalization()(dense_1) # dropout_2 = Dropout(dropout_p/2)(bn_2) # dense_2 = Dense(512, activation="relu")(dropout_2) # Add Final Output Layer # bn_3 = BatchNormalization()(dense_2) dropout_3 = Dropout(dropout_p)(dense_2) output_layer = Dense(output_dims, activation=activation)(dropout_3) return output_layer for layer in vgg_conv_model.layers: layer.trainable = False nb_output_dims = len(target_columns) vgg_last_conv_layer = vgg_conv_model.get_layer("block5_conv3") output_layer = add_bn_layers(vgg_last_conv_layer.output, dropout_p=0.9, output_dims=nb_output_dims) vgg_conv_model = Model(inputs=vgg_conv_model.inputs, outputs=output_layer) vgg_conv_model.summary() y_train train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size, shuffle=True) valid_generator = valid_datagen.flow(X_valid, y_valid, batch_size=len(X_valid), shuffle=False) # test_generator = test_datagen.flow(X_test, Y_test, batch_size=len(X_test), shuffle=False) for epoch, (X, y) in enumerate(train_generator): X_vgg_output = vgg_conv_model.predict(X) hist = model.fit(X_vgg_output, y, batch_size=len(X), epochs=1, verbose=0) if epoch > nb_epochs: break if epoch % 5 == 0: X_vgg_output = vgg_conv_model.predict_generator(valid_generator, steps=1) valid_result = model.evaluate_generator(X_vgg_output, y_valid) print(epoch, valid_result) # resnet_model = 
ResNet50(include_top=False, weights="imagenet", input_shape=(400, 266, 3)) def basic_cnn_model(input_shape, num_classes=3): model = Sequential() model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape)) model.add(Activation('relu')) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(64, (3, 3), padding='same')) model.add(Activation('relu')) model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0)) model.add(Dense(num_classes)) model.add(Activation('softmax')) # Let's train the model using RMSprop model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) return model m1 = basic_cnn_model((400, 266, 3), num_classes=7) def get_bn_layers(p, input_shape, output_dims=3, optimizer="rmsprop", metrics=["accuracy"]): model = Sequential([ MaxPooling2D(input_shape=input_shape), # BatchNormalization(axis=1), # Dropout(p/4), Flatten(), # Dense(512, activation='relu'), # BatchNormalization(), # Dropout(p/2), # Dense(512, activation='relu'), # BatchNormalization(), # Dropout(p), Dense(output_dims) ]) if output_dims == 1: model.add(Activation("sigmoid")) loss = "binary_crossentropy" else: model.add(Activation("softmax")) loss = "categorical_crossentropy" model.compile(optimizer=optimizer, loss=loss, metrics=metrics) return model vgg_last_conv_layer = vgg_conv_model.get_layer("block5_conv3") vgg_conv_model = Model(inputs=vgg_conv_model.inputs, outputs=vgg_last_conv_layer.output) # sleve_length_layer = add_bn_layers(vgg_last_conv_layer_output, # dropout_p=0.9, # output_dims=7) vgg_conv_model.summary() TARGET_CLASSES["category_GT"] = 7 TARGET_CLASSES["neckline_GT"] = 3 TARGET_CLASSES["sleevelength_GT"] = 3 model_input_shape vgg_last_conv_layer.output.shape[1:] model_input_shape # Create a Tuple of Model Shape model_input_shape = [int(item) for item in vgg_last_conv_layer.output.shape[1:]] # model_input_shape = (25, 16, 512) models = {} p = 0 for target, count in TARGET_CLASSES.items(): if target in ["sleevelength_GT", "category_GT"]: models[target] = get_bn_layers(p=p, input_shape=model_input_shape, output_dims=count) models["sleevelength_GT"].summary() LABELS_FILE = "data/labels.csv" TRAIN_IMAGES_FOLDER = "data/train/" VALID_IMAGES_FOLDER = "data/valid/" TEST_IMAGES_FOLDER = "data/test/" X_vgg_output.shape y_train X_vgg_output.shape y.shape ## Load Data and Train Model for target, model in models.items(): batch_size = 256 nb_epochs = 1 # Convert y_vect values to one hot vector # Traning Data X_train, y_train = get_data(TRAIN_IMAGES_FOLDER, LABELS_FILE, target) X_valid, y_valid = get_data(VALID_IMAGES_FOLDER, LABELS_FILE, target) nb_classes = TARGET_CLASSES[target] if nb_classes > 1: y_train = keras.utils.to_categorical(y_train, nb_classes) y_valid = keras.utils.to_categorical(y_valid, nb_classes) # # Test Data # X_test, y_test_vect = get_data(TEST_IMAGES_FOLDER, LABELS_FILE, target) # Y_test = keras.utils.to_categorical(y_test_vect, nb_classes) train_steps_per_epoch = len(X_train) // batch_size valid_steps_per_epoch = len(X_valid) // batch_size train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size, shuffle=True) valid_generator = valid_datagen.flow(X_valid, y_valid, batch_size=len(X_valid), shuffle=False) # test_generator = test_datagen.flow(X_test, Y_test, batch_size=len(X_test), 
shuffle=False) for epoch, (X, y) in enumerate(train_generator): X_vgg_output = vgg_conv_model.predict(X) hist = model.fit(X_vgg_output, y, batch_size=len(X), epochs=1, verbose=0) if epoch > nb_epochs: break if epoch % 5 == 0: X_vgg_output = vgg_conv_model.predict_generator(valid_generator, steps=1) valid_result = model.evaluate_generator(X_vgg_output, y_valid) print(epoch, valid_result) # # fits the model on batches with real-time data augmentation: # hist1 = models[target].fit(X_vgg_output, # steps_per_epoch=train_steps_per_epoch, # epochs=1, # validation_data=valid_generator, # validation_steps=valid_steps_per_epoch) vgg_conv_model.fit() _ = models["sleevelength_GT"].fit(X_vgg_output, Y_train) print("Train: ", (X_train.shape, Y_train.shape)) print("Validation: ", (X_valid.shape, Y_valid.shape)) print("Test: ", (X_test.shape, Y_test.shape)) # model = Model(inputs=vgg_conv_model.inputs, outputs=sleve_length_layer) # fits the model on batches with real-time data augmentation: hist1 = m1.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=1, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) model.optimizer.lr = 0.01 # fits the model on batches with real-time data augmentation: hist2 = m1.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=3, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) model.optimizer.lr = 1e-4 # fits the model on batches with real-time data augmentation: hist3 = m1.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=10, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) # fits the model on batches with real-time data augmentation: hist1 = model.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=1, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) model.optimizer.lr = 0.01 # fits the model on batches with real-time data augmentation: hist2 = model.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=3, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) model.optimizer.lr = 1e-4 # fits the model on batches with real-time data augmentation: hist3 = model.fit_generator(train_genarator, steps_per_epoch=train_steps_per_epoch, epochs=10, validation_data=valid_generator, validation_steps=valid_steps_per_epoch) train_loss = hist1.history["loss"] + hist2.history["loss"] + hist3.history["loss"] val_loss = hist1.history["val_loss"] + hist2.history["val_loss"] + hist3.history["val_loss"] train_acc = hist1.history["acc"] + hist2.history["acc"] + hist3.history["acc"] val_acc = hist1.history["val_acc"] + hist2.history["val_acc"] + hist3.history["val_acc"] hist1.history plt.plot(range(len(train_acc)), train_acc, label="Training Accuracy") ax = plt.plot(range(len(train_acc)), val_acc, label="Validation Accuracy") plt.legend(loc="best") plt.xlim([0, 20]) plt.ylim([0.5, 1]) plt.plot(range(len(train_loss)), train_loss, label="Training Loss") ax = plt.plot(range(len(train_loss)), val_loss, label="Validation Loss") plt.legend(loc="best") plt.xlim([0, 20]) plt.ylim([0, 1.5]) y_pred = model.predict(X_test) y_pred_val = np.argmax(y_pred, axis=1) y_true_val = np.argmax(Y_test, axis=1) (y_true_val == y_pred_val).sum() len(y_pred_val) from sklearn.metrics import confusion_matrix confusion_matrix(y_true_val, y_pred_val) 74/112 !mkdir models model.evaluate_generator(valid_generator, steps=len(X_valid)//valid_generator.batch_size) 
model.evaluate_generator(test_generator, steps=len(X_test)//test_generator.batch_size) ###Output _____no_output_____ ###Markdown Save Model ###Code !ls data/ model.save_weights("weights/slevelength_1.h5") ###Output _____no_output_____ ###Markdown Test Model ###Code from PIL import Image import requests from io import BytesIO import numpy as np from keras.applications.vgg16 import preprocess_input url = "https://developer.clarifai.com/static/images/model-samples/apparel-002.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) img_array = np.array(img) img_array = scipy.misc.imresize(img, (400, 266, 3)).astype("float") X = np.expand_dims(img_array, axis=0) X = preprocess_input(X) import numpy as np from keras.applications.vgg16 import preprocess_input # img = scipy.misc.imread("data/train/000002.jpg", mode="RGB") img_array = scipy.misc.imresize(img, (400, 266, 3)).astype(float) X = np.expand_dims(img_array, axis=0) X = preprocess_input(X) # img = scipy.misc.imread("data/train/000002.jpg", mode="RGB") # img_array = scipy.misc.imresize(img, (400, 266, 3)) # img_array[:,:,0] -= 103.939 # img_array[:,:,1] -= 116.779 # img_array[:,:,2] -= 123.68 # img_array = np.expand_dims(img_array, 0) model.predict(X) ###Output _____no_output_____
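###Markdown If a hard label is needed from the prediction above, the highest-probability index can be taken; a small sketch, assuming ``model`` and ``X`` are still the objects from the previous cells: ###Code
# index of the most probable class for the downloaded image
pred = model.predict(X)
print(np.argmax(pred, axis=1))
###Output _____no_output_____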
Code/Notebooks/EDA.ipynb
###Markdown Pattern Recognition Lab Style Classification in Posters Exploratory Data Analysis (EDA) Tim Löhr > - Style classification using WikiArt - Crawl WikiArt (images+styles) - Train DL-based network w. WikiArt data - Apply to poster data**Table of Contents** `(clickable)`- [1.0 - Data Loading](1)- [2.0 - Generate Pandas DataFrame](2)- [3.0 - Generate Pandas DataFrame](3)- [4.0 - Data Analysis](4) - [4.1 - Artworks distribution](4.1) - [4.2 - Year distribution](4.2) - [4.3 - Style distribution](4.3) - [4.4 - Style distribution](4.4)- [5.0 - Conclusion](5) ###Code import numpy as np import pandas as pd import seaborn as sns import json import os import sys import requests import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown (1) Data Loading ###Code meta_data_path = os.path.join("..","..", "Data", "wikiart", "meta") root_dir = os.path.join("..", "..", "Data", "wikiart", "images") csv_path = os.path.join("..", "..", "Data", "wikiart.csv") meta_dictionary = {} for artist in os.listdir(meta_data_path): artist_path = os.path.join(meta_data_path, artist) try: with open(artist_path, "rb") as file: json_file = json.load(file) artist = artist.split('.')[0] meta_dictionary[artist] = {} for i, artwork in enumerate(json_file): meta_dictionary[artist][i] = {} for j, feature in enumerate(artwork): if feature == "image": artwork[feature] = artwork[feature][:-10] meta_dictionary[artist][i][feature] = artwork[feature] except: pass print(f"Number of Artists: {len(meta_dictionary)}") ###Output Number of Artists: 734 ###Markdown (2) Generate Pandas DataFrame ###Code important_features = ['contentId', 'artistName', 'artistUrl','yearAsString', 'style', 'genre', 'tags', 'url'] try: df = pd.read_csv(csv_path) print(f"DataFrame successfully loaded. 
Dataframe Shape: {df.shape}") except: # prepare unique features for the pandas df features = [] for artist in meta_dictionary: for art in meta_dictionary[artist]: feats = list(meta_dictionary[artist][art].keys()) features = np.concatenate([features, feats]) unique_features = np.unique(features) print(f"Unique Features: {len(unique_features)}") # create pandas df based on that df = pd.DataFrame(columns=unique_features) # fill in features if it exist for artist in meta_dictionary: for art in meta_dictionary[artist]: feats = meta_dictionary[artist][art] df = df.append(feats, ignore_index=True) # Preprocessing df = df.replace('', np.nan) df = df.replace('None','') cleaned_df = pd.concat([df[important_features], df['style'].str.split(',', expand=True).rename(columns={0: "Style_1", 1: 'Style_2'})], axis=1).drop(columns=['style']).copy() cleaned_df = cleaned_df.dropna(subset=["Style_1", "genre"]).copy() cleaned_df['Style_1'] = cleaned_df['Style_1'].astype(str).str.strip() cleaned_df['Style_2'] = cleaned_df['Style_2'].astype(str).str.strip() cleaned_df.loc[cleaned_df['Style_1'].str.contains('Neo-Pop Art'), "Style_1"] = "Pop Art" cleaned_df.loc[cleaned_df['Style_2'].str.contains('Neo-Pop Art'), "Style_2"] = "Pop Art" cleaned_df['genre'] = cleaned_df['genre'].apply(lambda x: x.split(',')[0]) cleaned_df = cleaned_df.drop("Style_2", axis=1).rename(columns={"Style_1": "style"}).copy() # for dataloader cleaned_df['path'] = cleaned_df.apply(lambda x: str(x['artistUrl']) + "/" + str(x['yearAsString']) + "/" + str(x['contentId']) + ".jpg", axis=1) cleaned_df.drop(["contentId", "artistUrl", "url", "yearAsString"], axis=1, inplace=True) print(f"Dataset successfully generated: DataFrame Shape: {df.shape}") ###Output _____no_output_____ ###Markdown (3) Check if image exists ###Code dropped_df = cleaned_df.copy() dropped_df = dropped_df.reset_index().drop('index', axis=1) print(f"DataFrame shape before drop: {cleaned_df.shape}") for i, filename in enumerate(cleaned_df['path']): path = os.path.join(root_dir, filename) if not os.path.isfile(path): dropped_df.drop(index=i, inplace=True) print(f"DataFrame shape after drop: {dropped_df.shape}") df = dropped_df.copy() df.to_csv(csv_path, index=False) print("DataFrame successfully saved.") ###Output DataFrame successfully saved. 
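###Markdown As a side note, the row-by-row drop above can also be written as a single vectorized filter; a minimal alternative sketch (an addition, not part of the original notebook) using the same ``cleaned_df`` and ``root_dir``: ###Code
# equivalent, vectorized variant of the image-existence check
exists = cleaned_df['path'].apply(lambda p: os.path.isfile(os.path.join(root_dir, p)))
df_fast = cleaned_df[exists].reset_index(drop=True)
print(f"DataFrame shape after drop: {df_fast.shape}")
###Output _____no_output_____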
###Markdown (4) Data Analysis ###Code important_features = df.columns for feature in important_features: print(f"{feature} - Number of unique features: {len(df[feature].unique())}") images_per_artist = [] for artist in meta_dictionary: images_per_artist.append(len(artist)) ###Output _____no_output_____ ###Markdown (4.1) Artworks distribution ###Code plt.hist(images_per_artist) plt.xlabel("Number of Artworks per Artist") plt.ylabel("Count") plt.title("Distribution of Artworks per Artist") plt.show() ###Output _____no_output_____ ###Markdown (4.2) Year distribution ###Code data = df['path'].apply(lambda x: x.split("/")[1]).dropna().astype(int) binwidth = 10 plt.figure(figsize=(15,5)) plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) plt.xticks(np.arange(min(data), max(data)+1, 50)) plt.xlim(1400, 2020) plt.xlabel("Year") plt.ylabel("Count") plt.title("Distribution of Artworks by Year starting from the 15th Century") plt.show() ###Output _____no_output_____ ###Markdown (4.3) Style distribution ###Code style_count_df = df['style'].value_counts() data = style_count_df binwidth = 50 plt.figure(figsize=(15,5)) plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) plt.xticks(np.arange(min(data)-1, max(data), 50)) plt.xlim(1, 500) plt.xlabel("Amount of the style") plt.ylabel("Count") plt.title("Distribution of styles [0 to 500] different styles") plt.show() data = style_count_df binwidth = 50 plt.figure(figsize=(15,5)) plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) plt.xticks(np.arange(min(data)-1, max(data), 100)) plt.xlim(500, 3000) plt.ylim(0, 5) plt.xlabel("Amount of the style") plt.ylabel("Count") plt.title("Distribution of styles [500 to 2000] different styles") plt.show() ###Output _____no_output_____ ###Markdown (4.4) Genre distribution ###Code genre_count_df = df['genre'].value_counts() data = genre_count_df binwidth = 50 plt.figure(figsize=(15,5)) plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) plt.xticks(np.arange(min(data)-1, max(data), 50)) plt.xlim(1, 500) plt.xlabel("Amount of genres") plt.ylabel("Count") plt.title("Distribution of genres [0 to 500] different styles") plt.show() binwidth = 50 plt.figure(figsize=(15,5)) plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) plt.xticks(np.arange(min(data)-1, max(data), 100)) plt.xlim(500, 2500) plt.ylim(0, 5) plt.xlabel("Amount of genres") plt.ylabel("Count") plt.title("Distribution of genres [500 to 2000] different styles") plt.show() style_df = df[df['style'].str.contains('Pop')] print(f"Style_1 - Pop Art pictures in total: {len(style_df['style'])}") print(f"Style_1 - Pop Art variations in total: {len(np.unique(list(style_df['style'])))}") print() print(f"Poster occurence in genre: {len(df[df['genre'].str.contains('poster')])}") ###Output Style_1 - Pop Art pictures in total: 407 Style_1 - Pop Art variations in total: 1 Poster occurence in genre: 77
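###Markdown Since the stated goal of the project is style classification, a natural next step after this EDA is a stratified train/validation split over the ``style`` column. The sketch below is only an assumption about how such a split could be done with scikit-learn, not part of the original notebook. ###Code
from sklearn.model_selection import train_test_split

# keep only styles with at least 2 artworks so that stratification is possible
df_strat = df.groupby('style').filter(lambda g: len(g) >= 2)
train_df, val_df = train_test_split(
    df_strat, test_size=0.2, stratify=df_strat['style'], random_state=42)
print(train_df.shape, val_df.shape)
###Output _____no_output_____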
tutorials/4 Analysis/A. Core - EM and quantization/4.01 Capacitance and LOM.ipynb
###Markdown Capacitance matrix and LOM analysis PrerequisiteYou need to have a working local installation of Ansys. 1. Create the design in Metal ###Code %reload_ext autoreload %autoreload 2 import qiskit_metal as metal from qiskit_metal import designs, draw from qiskit_metal import MetalGUI, Dict, Headings design = designs.DesignPlanar() gui = MetalGUI(design) from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket from qiskit_metal.qlibrary.tlines.meandered import RouteMeander design.variables['cpw_width'] = '15 um' design.variables['cpw_gap'] = '9 um' ###Output _____no_output_____ ###Markdown In this example, the design consists of 4 qubits and 4 CPWs ###Code # Allow running the same cell here multiple times to overwrite changes design.overwrite_enabled = True ## Custom options for all the transmons options = dict( # Some options we want to modify from the defaults # (see below for defaults) pad_width = '425 um', pocket_height = '650um', # Adding 4 connectors (see below for defaults) connection_pads=dict( readout = dict(loc_W=+1,loc_H=-1, pad_width='200um'), bus1 = dict(loc_W=-1,loc_H=+1, pad_height='30um'), bus2 = dict(loc_W=-1,loc_H=-1, pad_height='50um') ) ) ## Create 4 transmons q1 = TransmonPocket(design, 'Q1', options = dict( pos_x='+2.42251mm', pos_y='+0.0mm', **options)) q2 = TransmonPocket(design, 'Q2', options = dict( pos_x='+0.0mm', pos_y='-0.95mm', orientation = '270', **options)) q3 = TransmonPocket(design, 'Q3', options = dict( pos_x='-2.42251mm', pos_y='+0.0mm', orientation = '180', **options)) q4 = TransmonPocket(design, 'Q4', options = dict( pos_x='+0.0mm', pos_y='+0.95mm', orientation = '90', **options)) RouteMeander.get_template_options(design) options = Dict( lead=Dict( start_straight='0.2mm', end_straight='0.2mm'), trace_gap='9um', trace_width='15um') def connect(component_name: str, component1: str, pin1: str, component2: str, pin2: str, length: str, asymmetry='0 um', flip=False, fillet='90um'): """Connect two pins with a CPW.""" myoptions = Dict( fillet=fillet, hfss_wire_bonds = True, pin_inputs=Dict( start_pin=Dict( component=component1, pin=pin1), end_pin=Dict( component=component2, pin=pin2)), total_length=length) myoptions.update(options) myoptions.meander.asymmetry = asymmetry myoptions.meander.lead_direction_inverted = 'true' if flip else 'false' return RouteMeander(design, component_name, myoptions) asym = 140 cpw1 = connect('cpw1', 'Q1', 'bus2', 'Q2', 'bus1', '6.0 mm', f'+{asym}um') cpw2 = connect('cpw2', 'Q3', 'bus1', 'Q2', 'bus2', '6.1 mm', f'-{asym}um', flip=True) cpw3 = connect('cpw3', 'Q3', 'bus2', 'Q4', 'bus1', '6.0 mm', f'+{asym}um') cpw4 = connect('cpw4', 'Q1', 'bus1', 'Q4', 'bus2', '6.1 mm', f'-{asym}um', flip=True) gui.rebuild() gui.autoscale() ###Output _____no_output_____ ###Markdown 2. Capacitance Analysis and LOM derivation using the analysis package - most users Capacitance AnalysisSelect the analysis you intend to run from the `qiskit_metal.analyses` collection.Select the design to analyze and the tool to use for any external simulation ###Code from qiskit_metal.analyses.quantization import LOManalysis c1 = LOManalysis(design, "q3d") ###Output _____no_output_____ ###Markdown (optional) You can review and update the Analysis default setup following the examples in the next two cells. 
###Code c1.sim.setup # example: update single setting c1.sim.setup.max_passes = 6 # example: update multiple settings c1.sim.setup_update(solution_order = 'Medium', auto_increase_solution_order = 'False') c1.sim.setup ###Output _____no_output_____ ###Markdown Analyze a single qubit with 2 endcaps using the default (or edited) analysis setup. Then show the capacitance matrix (from the last pass).You can use the method `run()` instead of `sim.run()` in the following cell if you want to run both cap extraction and lom analysis in a single step. If so, make sure to also tweak the setup for the lom analysis. The input parameters are otherwise the same for the two methods. ###Code c1.sim.run(components=['Q1'], open_terminations=[('Q1', 'readout'), ('Q1', 'bus1'), ('Q1', 'bus2')]) c1.sim.capacitance_matrix ###Output _____no_output_____ ###Markdown (otional - case-dependent)If the previous cell was interrupted due to license limitations and for any reason you finally manually launched the simulation from the renderer GUI (outside qiskit-metal) you might be able to recover the simulation results by uncommenting and executing the following cell ###Code #c1.sim._get_results_from_renderer() #c1.sim.capacitance_matrix ###Output _____no_output_____ ###Markdown The last variables you pass to the `run()` or `sim.run()` methods, will be stored in the `sim.setup` dictionary under the key `run`. You can recall the information passed by either accessing the dictionary directly, or by using the print handle below. ###Code # c1.setup.run <- direct access c1.sim.print_run_args() ###Output _____no_output_____ ###Markdown You can re-run the analysis after varying the parameters.Not passing the parameter `components` to the `sim.run()` method, skips the rendering and tries to run the analysis on the latest design. If a design is not found, the full metal design is rendered. ###Code c1.sim.setup.freq_ghz = 4.8 c1.sim.run() c1.sim.capacitance_matrix type(c1.sim.capacitance_matrix) ###Output _____no_output_____ ###Markdown Lumped oscillator model (LOM)Using capacitance matrices obtained from each pass, save the many parameters of the Hamiltonian of the system. `get_lumped_oscillator()` operates on 4 setup parameters: Lj: float Cj: float fr: Union[list, float] fb: Union[list, float] ###Code c1.setup.junctions = Dict({'Lj': 12.31, 'Cj': 2}) c1.setup.freq_readout = 7.0 c1.setup.freq_bus = [6.0, 6.2] c1.run_lom() c1.lumped_oscillator_all c1.plot_convergence(); c1.plot_convergence_chi() ###Output _____no_output_____ ###Markdown Once you are done with your analysis, please close it with `close()`. This will free up resources currently occupied by qiskit-metal to communiate with the tool. ###Code c1.sim.close() ###Output _____no_output_____ ###Markdown 3. Directly access the renderer to modify other parameters ###Code c1.sim.start() c1.sim.renderer ###Output _____no_output_____ ###Markdown Every renderer will have its own collection of methods. Below an example with q3d Prepare and run a collection of predefined setupsThis is equivalent to going to the Project Manager panel in Ansys, right clicking on Analysis within the active Q3D design, selecting "Add Solution Setup...", and choosing/entering default values in the resulting popup window. You might want to do this to keep track of different solution setups, giving each of them a different/specific name. 
###Code setup = c1.sim.renderer.new_ansys_setup(name = "Setup_demo", max_passes = 6) ###Output _____no_output_____ ###Markdown You can directly pass to `new_ansys_setup` all the setup parameters. Of course you will then need to run the individual setups by name as well. ###Code c1.sim.renderer.analyze_setup(setup.name) ###Output _____no_output_____ ###Markdown Get the capactiance matrix at a different passYou might want to use this if you intend to know what was the matrix at a different pass of the simulation. ###Code # Using the analysis results, get its capacitance matrix as a dataframe. c1.sim.renderer.get_capacitance_matrix(variation = '', solution_kind = 'AdaptivePass', pass_number = 5) ###Output _____no_output_____ ###Markdown Code to swap rows and columns in capacitance matrixfrom qiskit_metal.analyses.quantization.lumped_capacitive import df_reorder_matrix_basisdf_reorder_matrix_basis(fourq_q3d.get_capacitance_matrix(), 1, 2) Close the renderer ###Code c1.sim.close() ###Output _____no_output_____ ###Markdown Capacitance matrix and LOM analysis PrerequisiteYou need to have a working local installation of Ansys. 1. Create the design in Metal ###Code %reload_ext autoreload %autoreload 2 import qiskit_metal as metal from qiskit_metal import designs, draw from qiskit_metal import MetalGUI, Dict, Headings design = designs.DesignPlanar() gui = MetalGUI(design) from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket from qiskit_metal.qlibrary.tlines.meandered import RouteMeander design.variables['cpw_width'] = '15 um' design.variables['cpw_gap'] = '9 um' ###Output _____no_output_____ ###Markdown In this example, the design consists of 4 qubits and 4 CPWs ###Code # Allow running the same cell here multiple times to overwrite changes design.overwrite_enabled = True ## Custom options for all the transmons options = dict( # Some options we want to modify from the defaults # (see below for defaults) pad_width = '425 um', pocket_height = '650um', # Adding 4 connectors (see below for defaults) connection_pads=dict( readout = dict(loc_W=+1,loc_H=-1, pad_width='200um'), bus1 = dict(loc_W=-1,loc_H=+1, pad_height='30um'), bus2 = dict(loc_W=-1,loc_H=-1, pad_height='50um') ) ) ## Create 4 transmons q1 = TransmonPocket(design, 'Q1', options = dict( pos_x='+2.42251mm', pos_y='+0.0mm', **options)) q2 = TransmonPocket(design, 'Q2', options = dict( pos_x='+0.0mm', pos_y='-0.95mm', orientation = '270', **options)) q3 = TransmonPocket(design, 'Q3', options = dict( pos_x='-2.42251mm', pos_y='+0.0mm', orientation = '180', **options)) q4 = TransmonPocket(design, 'Q4', options = dict( pos_x='+0.0mm', pos_y='+0.95mm', orientation = '90', **options)) RouteMeander.get_template_options(design) options = Dict( lead=Dict( start_straight='0.2mm', end_straight='0.2mm'), trace_gap='9um', trace_width='15um') def connect(component_name: str, component1: str, pin1: str, component2: str, pin2: str, length: str, asymmetry='0 um', flip=False, fillet='90um'): """Connect two pins with a CPW.""" myoptions = Dict( fillet=fillet, hfss_wire_bonds = True, pin_inputs=Dict( start_pin=Dict( component=component1, pin=pin1), end_pin=Dict( component=component2, pin=pin2)), total_length=length) myoptions.update(options) myoptions.meander.asymmetry = asymmetry myoptions.meander.lead_direction_inverted = 'true' if flip else 'false' return RouteMeander(design, component_name, myoptions) asym = 140 cpw1 = connect('cpw1', 'Q1', 'bus2', 'Q2', 'bus1', '6.0 mm', f'+{asym}um') cpw2 = connect('cpw2', 'Q3', 'bus1', 'Q2', 'bus2', '6.1 
mm', f'-{asym}um', flip=True) cpw3 = connect('cpw3', 'Q3', 'bus2', 'Q4', 'bus1', '6.0 mm', f'+{asym}um') cpw4 = connect('cpw4', 'Q1', 'bus1', 'Q4', 'bus2', '6.1 mm', f'-{asym}um', flip=True) gui.rebuild() gui.autoscale() ###Output _____no_output_____ ###Markdown 2. Capacitance Analysis and LOM derivation using the analysis package - most users Capacitance AnalysisSelect the analysis you intend to run from the `qiskit_metal.analyses` collection.Select the design to analyze and the tool to use for any external simulation ###Code from qiskit_metal.analyses.quantization import LOManalysis c1 = LOManalysis(design, "q3d") ###Output _____no_output_____ ###Markdown (optional) You can review and update the Analysis default setup following the examples in the next two cells. ###Code c1.sim.setup # example: update single setting c1.sim.setup.max_passes = 6 # example: update multiple settings c1.sim.setup_update(solution_order = 'Medium', auto_increase_solution_order = 'False') c1.sim.setup ###Output _____no_output_____ ###Markdown Analyze a single qubit with 2 endcaps using the default (or edited) analysis setup. Then show the capacitance matrix (from the last pass).You can use the method `run()` instead of `sim.run()` in the following cell if you want to run both cap extraction and lom analysis in a single step. If so, make sure to also tweak the setup for the lom analysis. The input parameters are otherwise the same for the two methods. ###Code c1.sim.run(components=['Q1'], open_terminations=[('Q1', 'readout'), ('Q1', 'bus1'), ('Q1', 'bus2')]) c1.sim.capacitance_matrix ###Output _____no_output_____ ###Markdown The last variables you pass to the `run()` or `sim.run()` methods, will be stored in the `sim.setup` dictionary under the key `run`. You can recall the information passed by either accessing the dictionary directly, or by using the print handle below. ###Code # c1.setup.run <- direct access c1.sim.print_run_args() ###Output _____no_output_____ ###Markdown You can re-run the analysis after varying the parameters.Not passing the parameter `components` to the `sim.run()` method, skips the rendering and tries to run the analysis on the latest design. If a design is not found, the full metal design is rendered. ###Code c1.sim.setup.freq_ghz = 4.8 c1.sim.run() c1.sim.capacitance_matrix type(c1.sim.capacitance_matrix) ###Output _____no_output_____ ###Markdown Lumped oscillator model (LOM)Using capacitance matrices obtained from each pass, save the many parameters of the Hamiltonian of the system. `get_lumped_oscillator()` operates on 4 setup parameters: Lj: float Cj: float fr: Union[list, float] fb: Union[list, float] ###Code c1.setup.junctions = Dict({'Lj': 12.31, 'Cj': 2}) c1.setup.freq_readout = 7.0 c1.setup.freq_bus = [6.0, 6.2] c1.run_lom() c1.lumped_oscillator_all c1.plot_convergence(); c1.plot_convergence_chi() ###Output _____no_output_____ ###Markdown Once you are done with your analysis, please close it with `close()`. This will free up resources currently occupied by qiskit-metal to communiate with the tool. ###Code c1.sim.close() ###Output _____no_output_____ ###Markdown 3. Directly access the renderer to modify other parameters ###Code c1.sim.start() c1.sim.renderer ###Output _____no_output_____ ###Markdown Every renderer will have its own collection of methods. 
Below an example with q3d Prepare and run a collection of predefined setupsThis is equivalent to going to the Project Manager panel in Ansys, right clicking on Analysis within the active Q3D design, selecting "Add Solution Setup...", and choosing/entering default values in the resulting popup window. You might want to do this to keep track of different solution setups, giving each of them a different/specific name. ###Code setup = c1.sim.renderer.new_ansys_setup(name = "Setup_demo", max_passes = 6) ###Output _____no_output_____ ###Markdown You can directly pass to `new_ansys_setup` all the setup parameters. Of course you will then need to run the individual setups by name as well. ###Code c1.sim.renderer.analyze_setup(setup.name) ###Output _____no_output_____ ###Markdown Get the capactiance matrix at a different passYou might want to use this if you intend to know what was the matrix at a different pass of the simulation. ###Code # Using the analysis results, get its capacitance matrix as a dataframe. c1.sim.renderer.get_capacitance_matrix(variation = '', solution_kind = 'AdaptivePass', pass_number = 5) ###Output _____no_output_____ ###Markdown Code to swap rows and columns in capacitance matrixfrom qiskit_metal.analyses.quantization.lumped_capacitive import df_reorder_matrix_basisdf_reorder_matrix_basis(fourq_q3d.get_capacitance_matrix(), 1, 2) Close the renderer ###Code c1.sim.close() ###Output _____no_output_____
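###Markdown A small optional follow-up (a sketch, not part of the original flow): since the extracted capacitance matrix is exposed as a dataframe (see the dataframe-returning calls above), it can be persisted with standard pandas I/O, assuming the stored result remains accessible on the analysis object. The file name here is only an illustration. ###Code
# Save the last extracted capacitance matrix to disk (assumes it is a pandas DataFrame)
c1.sim.capacitance_matrix.to_csv("q1_capacitance_matrix.csv")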
Stacks, Queues and Deque's.ipynb
###Markdown QUEUEADT description:```Q.enqueue(e)Q.dequeue()Q.is_empty()Q.front()len(Q)``` Implementation:* Using a List:```enqueue -> List.append()dequeue -> List.pop(0)The problem is List.pop(0) runs in O(n) time which is inefficient.```* Using a List without shifting n elements for each dequeue works but the size of the list is O(m) instead of O(n) where m is the number of enqueue operations and n is the number of elements in the queue.* Using a Circular Buffer. i.e The list indices will be modified to point to the beginning of the queue. ###Code class ArrayQueue: DEF_CAP = 10 def __init__(self): self._data = [None] * ArrayQueue.DEF_CAP self._size = 0 self._front = 0 def __len__(self): return self._size def is_empty(self): return self._size == 0 def first(self): if self.is_empty(): raise Empty('Queue is empty') return self._data[self._front] def dequeue(self): if self.is_empty(): raise Empty('Queue is empty') answer = self._data[self._front] self._data[self._front] = None self._front = (self._front + 1) % len(self._data) # size of array, not the Queue self._size -= 1 if 0 < self._size < len(self._data)//4: self._resize(len(self._data)//2) return answer def _resize(self,cap): old = self._data self._data = [None] * cap walk = self._front for i in range(self._size): self._data[i] = old[walk] walk = (walk + 1) % len(old) self._front = 0 #On resize, Q front and Q data changes not its size but underlying array size changes def enqueue(self,e): if self._size == len(self._data): self._resize(2 * self._size) avail = (self._front + self._size) % len(self._data) self._data[avail] = e self._size += 1 Q = ArrayQueue() print(Q._data) Q.enqueue(1) print(Q._data) Q.enqueue(1) print(Q._data) for i in range(4,9): Q.enqueue(i) print(Q._data) Q.dequeue() Q.dequeue() for i in range(9,14): Q.enqueue(i) print(Q._data) ###Output [None, None, None, None, None, None, None, None, None, None] [1, None, None, None, None, None, None, None, None, None] [1, 1, None, None, None, None, None, None, None, None] [1, 1, 4, None, None, None, None, None, None, None] [1, 1, 4, 5, None, None, None, None, None, None] [1, 1, 4, 5, 6, None, None, None, None, None] [1, 1, 4, 5, 6, 7, None, None, None, None] [1, 1, 4, 5, 6, 7, 8, None, None, None] [None, None, 4, 5, 6, 7, 8, 9, None, None] [None, None, 4, 5, 6, 7, 8, 9, 10, None] [None, None, 4, 5, 6, 7, 8, 9, 10, 11] [12, None, 4, 5, 6, 7, 8, 9, 10, 11] [12, 13, 4, 5, 6, 7, 8, 9, 10, 11] ###Markdown DEQUE_Pronounced 'Deck'_ These are double ended Queues. The deque ADT:```D.add_first(e)D.add_last(e)D.delete_first()D.delete_last()D.first()D.last()D.is_empty()len(D)```Implementation:* I think a circular buffer like previously ought to work. Apparently it is quite similar to ArrayQueue. Let's try to implement it. 
###Code class ArrayDeque: DEF_CAP = 10 def __init__(self): self._data = [None] * self.DEF_CAP self._size = 0 self._front = 0 def __len__(self): return self._size def is_empty(self): return self._size == 0 def first(self): return self._data[self._front] def last(self): back = ( self._front + self._size - 1 ) % len(self._data) return self._data[back] def _resize(self,cap): old = self._data self._data = [None] * cap walk = self._front for i in range(self._size): self._data[i] = old[walk] walk = (walk + 1) % len(self._data) self._front = 0 def _add(self,pos,e): if self._size == len(self._data): self._resize(2 * self._size) if pos == 'f': self._front = (self._front - 1) % len(self._data) self._data[self._front] = e self._size +=1 elif pos == 'b': avail = (self._front + self._size ) % len(self._data) self._data[avail] = e self._size +=1 def add_first(self,e): self._add('f',e) def add_last(self,e): self._add('b',e) def delete_first(self): if self.is_empty(): raise Empty('Deque is empty') old = self._data[self._front] self._data[self._front] = None self._front = (self._front + 1) % len(self._data) self._size -= 1 if 0 < self._size < len(self._data)//4: self._resize(len(self._data)//2) return old def delete_last(self): if self.is_empty(): raise Empty('Deque is empty') old_index = (self._front + self._size - 1) % len(self._data) old = self._data[old_index] self._data[old_index] = None self._size -= 1 if 0 < self._size < len(self._data)//4: self._resize(len(self._data)//2) return old D = ArrayDeque() D.add_first(1) print(D.first(),D.last()) print(D._data) D.add_last(2) print(D._data) D.add_first(3) print(D._data) print(D.first()) D.delete_first() print(D._data) print(D.last()) D.delete_last() print(D._data) D.first() == D.last() D.delete_last() D._data ###Output _____no_output_____ ###Markdown Chapter Exercises ###Code def stack_transfer(S:ArrayStack, T:ArrayStack): while len(S)!=0: T.push(S.pop()) return T A = ArrayStack() B = ArrayStack() for i in range(10): A.push(i) print(A._data) print(B._data) stack_transfer(A,B)._data def recur_del(S:ArrayStack): 'Recursively delete stack elements' if len(S) ==0: return True else: S.pop() return recur_del(S) temp = ArrayStack() for i in range(10): temp.push(i) recur_del(temp) def list_rev(x:list): S = ArrayStack() for i in x: S.push(i) temp = [] while len(S) != 0: temp.append(S.pop()) return temp list_rev([1,2,3]) Q = ArrayQueue() for i in range(5): try: Q.dequeue() except Exception as e: print(e) for i in range(30): Q.enqueue(i) print(Q._data) for i in range(10): Q.dequeue() Q.enqueue(100) Q.enqueue(100) print(Q._data) Q._front Q._size D = ArrayDeque() for i in range(1,9): D.add_last(i) Q = ArrayQueue() # Initially D has (1,2,3,4,5,6,7,8). FInally we need (1,2,3,5,4,6,7,8) # [5,6,7,8], [1,2,3,4] # [5,6,7,8,4], [1,2,3] # [6,7,8,4], [1,2,3,5] # [6,7,8], [1,2,3,5,4] # [1,2,3,5,4,6,7,8], [] is_matched_html('<li> What color is the boat? </li>') ###Output _____no_output_____
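###Markdown One possible answer to the deque/queue exercise sketched in the comments above (a sketch assuming `D` holds (1,...,8) and `Q` is empty, as set up in that cell; the route differs from the traced comments but uses only D and Q). The idea: rotate 5 to the front, park it in Q, bring 4 around to the front from the back, put 5 back in front of it, then rotate until 1 leads again. ###Code
# Rearrange D from (1,2,3,4,5,6,7,8) to (1,2,3,5,4,6,7,8) using only D and Q
for _ in range(4):                 # rotate: D = [5,6,7,8,1,2,3,4]
    D.add_last(D.delete_first())
Q.enqueue(D.delete_first())        # Q = [5], D = [6,7,8,1,2,3,4]
D.add_first(D.delete_last())       # D = [4,6,7,8,1,2,3]
D.add_first(Q.dequeue())           # D = [5,4,6,7,8,1,2,3]
for _ in range(5):                 # rotate back: D = [1,2,3,5,4,6,7,8]
    D.add_last(D.delete_first())
print([D.delete_first() for _ in range(len(D))])   # empties D just to show the order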
playground/game/playground.ipynb
###Markdown Blackboard ###Code import numpy as np from collections import defaultdict, Counter import matplotlib.pyplot as plt class Deck(object): def __init__(self, numbers, seeds): self.n, self.s = numbers, seeds self.shuffle() self.counter = defaultdict(lambda: 0) def shuffle(self): self.deck = [] for num in self.n: for seed in self.s: self.deck.append((num, seed)) def serve(self, cards=5, reinsert=True): hand = [self.deck[x] for x in np.random.choice(range(len(self.deck)), cards, replace=False)] if not reinsert: self.deck = [x for x in self.deck if x not in hand] return hand def run_test(self, iterations): for i in range(iterations): h = self.serve(reinsert=True) self.counter[Deck.comb(h)] += 1 @staticmethod def comb(hand): numbers = [x for x, y in hand] return tuple([y for x, y in Counter(numbers).most_common()]) n, s = range(1, 14), ['C', 'Q', 'F', 'P'] d = Deck(n, s) results = [] for i in range(100): d = Deck(n, s) d.run_test(5000) c_map = [(1, 1, 1, 1, 1), (2, 1, 1, 1), (2, 2, 1), (3, 1, 1), (3, 2), (4, 1)] res = np.zeros(len(c_map)) for k, v in d.counter.items(): res[c_map.index(k)] = v results.append(res / res.sum()) M = np.array(results) c = (2, 2, 1) i = c_map.index(c) i_r = M[:,i] print (np.argmax(i_r), i_r.mean(), i_r.std()) plt.boxplot(i_r) plt.show() ###Output 81 0.047850000000000004 0.0032312691005238177
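###Markdown A quick analytic sanity check for the simulated (2, 2, 1) frequency above (a sketch; needs Python 3.8+ for `math.comb`): with 13 ranks and 4 suits the hand is 5 distinct cards from 52, so the exact two-pair probability can be computed in closed form and compared against the simulated mean of roughly 0.048. ###Code
from math import comb

# Exact probability of a (2, 2, 1) hand (two pair) in 5 cards drawn from 52
p_two_pair = comb(13, 2) * comb(4, 2) ** 2 * comb(11, 1) * comb(4, 1) / comb(52, 5)
print(p_two_pair)  # ~0.0475, close to the simulated mean above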
report_notebooks/LanguageModelReports.ipynb
###Markdown Language Model ReportsSince my local machine does not have GPU support and thus can't perform many model training and evaluation tasks in a reasonable amount of time, I have created a script `evaluation.lua` in this repository which generates reports on a language model and serializes them in JSON. This notebook will consume these reports and explore them. It will also include some information about the models these reports were made for that is not included in the serialized report. ###Code # load some requirements import json import matplotlib.pyplot as plt with open('reports/unweightednoavg_one_layer_12.json', 'r') as f: first_report = json.loads(f.read()) with open('reports/unweightednoavg_7.json', 'r') as f: second_report = json.loads(f.read()) with open('reports/unweightednoavg_4.json', 'r') as f: third_report = json.loads(f.read()) ###Output _____no_output_____ ###Markdown 25K Shallower, Broader Network Trained With AdamI created a model with 1 LSTM layer, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure:```nn.Sequential { [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output] (1): nn.LookupTable (2): nn.LSTM(100 -> 512) (3): nn.Dropout(0.10000) (4): nn.DynamicView (5): nn.Linear(300 -> 25000) (6): nn.LogSoftMax}```Notably, this one is a layer shallower and has a larger hidden size, with slightly reduced dropout. While it is not captured in the report, this model converged to it's final loss more quickly than the previous model. The use of adam also lead to considerably lower loss Perplexity on the DatasetsThis model experienced a reduced perplexity across each of the datasets: ###Code # print out the losses from the report print 'Training set perplexity:', first_report['train_perplexity'] print 'Validation set perplexity:', first_report['valid_perplexity'] print 'Test set perplexity:', first_report['test_perplexity'] ###Output Training set perplexity: 143.271405408 Validation set perplexity: 228.638472902 Test set perplexity: 229.812204025 ###Markdown Loss vs EpochLoss is charted vs. current epoch, with labels of the learning rate used at each epoch NOTE: In the first several series, loss is on the last training example. Current implementation calculates average loss, but this is not reflected in early series ###Code with open('logs/log_series.json', 'r') as f: logs = json.loads(f.read()) for k in logs.keys(): plt.plot(logs[k][0], logs[k][1], label=str(k)) plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() # function for turning report data into scatter plot def scatterize_batch_loss(report_batch_loss): x = [] y = [] for i, v in enumerate(report_batch_loss): if i > 50: break # We'll only consider ones of length 50 and below to get a better view of the data in the chart. if isinstance(v, list): x.extend([i + 1 for j in v]) # same batch size for all losses in v y.extend([j for j in v]) else: if v is not None: x.append(i) y.append(v) return x, y %matplotlib inline x, y = scatterize_batch_loss(first_report['train_batch_perplexities']) plt.scatter(x, y) plt.title('Training Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(first_report['valid_batch_perplexities']) plt.scatter(x, y) plt.title('Validation Perplexity v. 
Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(first_report['test_batch_perplexities']) plt.scatter(x, y) plt.title('Test Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() ###Output _____no_output_____ ###Markdown Notably, this model has a loss below 6 for sequences that are ~10 words or less. Generation SamplesWe can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special `` token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins. I chose to look at only short sequences, as the models each have lower loss for these, and might stand a chance of answering correctly. ###Code def print_sample(sample): seq = sample['generated'].split(' ') seq.insert(sample['supplied_length'] + 1, '<G>') gold = sample['gold'].split(' ') gold.insert(sample['supplied_length'], '<G>') print('Gend: ' + ' '.join(seq)) print('True: ' + seq[1] + ' ' + ' '.join(gold) + '\n') for sample in first_report['train_samples'][5:]: print_sample(sample) for sample in first_report['valid_samples'][0:5]: print_sample(sample) for sample in first_report['test_samples'][0:5]: print_sample(sample) ###Output Gend: Even the basic <UNK> wasn 't being done <G> to the <UNK> <UNK> . </S> True: Even the basic <UNK> wasn 't being done <G> . </S> Gend: Its <G> <UNK> <UNK> , <UNK> <UNK> , <UNK> <UNK> , <UNK> True: Its <G> findings and those of other research reports follow . </S> Gend: And he kept his promise , " <UNK> <G> " and " <UNK> " is a <UNK> <UNK> . True: And he kept his promise , " <UNK> <G> recalls . </S> Gend: You have to have a soul for that . <G> </S> True: You have to have a soul for that . <G> </S> Gend: Eventually , it began to show . <G> </S> True: Eventually , it began to show . <G> </S> ###Markdown ConclusionThis model has lower loss and doesn't seem to make quite as many gibberish mistakes in generation (double periods, long strings of ``, etc.) This is perhaps too small of a sample to make a real conclusion though. Like the previous model, it tends to favor abrupt endings, as it likely is being punished less for only getting a couple tokens wrong instead of a long sequence of wrong answers. It is also leaves an idea hanging, ending sentences with "the", etc. 25K Deeper, Thinner NetworkI created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure:```nn.Sequential { [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> output] (1): nn.LookupTable (2): nn.LSTM(100 -> 300) (3): nn.Dropout(0.100000) (4): nn.LSTM(300 -> 300) (5): nn.Dropout(0.100000) (6): nn.DynamicView (7): nn.Linear(300 -> 25000) (8): nn.LogSoftMax}``` Losses on the DatasetsI have created 3 datasets, built from the Google Billion Words data set. I trained on a version of the `train_small` data set with a reduced vocabulary of 25000, in batches of size 50, with a sequence length cut off of 30. I did not tune any hyper parameters with the validation set, but this could be future work. There is also a small test set. 
###Code # print out the losses from the report print 'Training set loss:', second_report['train_perplexity'] print 'Validation set loss:', second_report['valid_perplexity'] print 'Test set loss:', second_report['test_perplexity'] with open('logs/log_series_2_layer.json', 'r') as f: logs = json.loads(f.read()) for k in logs.keys(): plt.plot(logs[k][0], logs[k][1], label=str(k)) plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Loss Versus Sequence LengthWe can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size): ###Code %matplotlib inline x, y = scatterize_batch_loss(second_report['train_batch_perplexities']) plt.scatter(x, y) plt.title('Training Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(second_report['valid_batch_perplexities']) plt.scatter(x, y) plt.title('Validation Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(second_report['test_batch_perplexities']) plt.scatter(x, y) plt.title('Test Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() ###Output _____no_output_____ ###Markdown Generation SamplesWe can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special `` token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins. Training Set Generation Examples ###Code for sample in second_report['train_samples']: print_sample(sample) for sample in second_report['valid_samples'][0:5]: print_sample(sample) for sample in second_report['test_samples'][0:5]: print_sample(sample) ###Output Gend: That would also be a <G> <UNK> of the <UNK> <UNK> , which is the <UNK> True: That would also be a <G> good idea for TVs . </S> Gend: Nothing personal : just <G> <UNK> the <UNK> of the <UNK> <UNK> , <UNK> , True: Nothing personal : just <G> habits . </S> Gend: <UNK> is a prescription <G> drug addiction to the <UNK> of the <UNK> , which True: <UNK> is a prescription <G> device . </S> Gend: Now we are <G> <UNK> , " said <UNK> <UNK> , a former <UNK> True: Now we are <G> starting anew . </S> Gend: A problem at past <G> <UNK> , the <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> <UNK> True: A problem at past <G> Games has been empty seats . </S> ###Markdown ConclusionWhile we can see this model has the expected distribution of losses over each set, and does not over fit, it doesn't generate coherent conclusions to the input sentence fragments. In terms of generation quality, it leaves a lot to be desired. Same Network, Earlier EpochI created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. 
Here we can look at it's structure:```nn.Sequential { [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> output] (1): nn.LookupTable (2): nn.LSTM(100 -> 300) (3): nn.Dropout(0.100000) (4): nn.LSTM(300 -> 300) (5): nn.Dropout(0.100000) (6): nn.DynamicView (7): nn.Linear(300 -> 25000) (8): nn.LogSoftMax}``` Losses on the DatasetsI have created 3 datasets, built from the Google Billion Words data set. I trained on a version of the `train_small` data set with a reduced vocabulary of 25000, in batches of size 50, with a sequence length cut off of 30. I did not tune any hyper parameters with the validation set, but this could be future work. There is also a small test set. ###Code # print out the losses from the report print 'Training set loss:', third_report['train_perplexity'] print 'Validation set loss:', third_report['valid_perplexity'] print 'Test set loss:', third_report['test_perplexity'] with open('logs/log_series_2_layer.json', 'r') as f: logs = json.loads(f.read()) for k in logs.keys(): plt.plot(logs[k][0], logs[k][1], label=str(k)) plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Loss Versus Sequence LengthWe can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size): ###Code %matplotlib inline x, y = scatterize_batch_loss(third_report['train_batch_perplexities']) plt.scatter(x, y) plt.title('Training Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(third_report['valid_batch_perplexities']) plt.scatter(x, y) plt.title('Validation Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() %matplotlib inline x, y = scatterize_batch_loss(third_report['test_batch_perplexities']) plt.scatter(x, y) plt.title('Test Perplexity v. Sequence Length') plt.xlabel('Sequence Length') plt.ylabel('Perplexity') plt.show() ###Output _____no_output_____ ###Markdown Generation SamplesWe can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special `` token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins. Training Set Generation Examples ###Code for sample in third_report['train_samples']: print_sample(sample) for sample in third_report['valid_samples'][0:5]: print_sample(sample) for sample in third_report['test_samples'][0:5]: print_sample(sample) ###Output Gend: " The ' Yes <G> ' is a <UNK> , but it 's a <UNK> True: " The ' Yes <G> on <UNK> . </S> Gend: Tuesday was <G> the first time the <UNK> <UNK> was <UNK> by the True: Tuesday was <G> about <UNK> celebratory dances . </S> Gend: Two <G> of the <UNK> were <UNK> , <UNK> , <UNK> , True: Two <G> others sustained broken bones . </S> Gend: Luxembourg 's airline <UNK> <UNK> the turnaround <G> in the first quarter of 2008 , while the dollar True: Luxembourg 's airline <UNK> <UNK> the turnaround <G> . </S> Gend: Unfortunately , these incidents were planned . <G> </S> True: Unfortunately , these incidents were planned . <G> </S>
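###Markdown To wrap up, a compact side-by-side of the held-out results (a sketch reusing the three report dicts loaded at the top of this notebook; the labels are informal shorthand for the three models described above). ###Code
# Compare test perplexities across the three reports
for name, rep in [('1-layer model', first_report),
                  ('2-layer model', second_report),
                  ('2-layer model, earlier epoch', third_report)]:
    print name, '- test perplexity:', rep['test_perplexity']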
demo/notebooks/[vnm_ocr_toolbox]_Train_PAN_for_Text_Detection.ipynb
###Markdown Clone repo ###Code !git clone https://github.com/kaylode/vnm-ocr-toolbox.git main %cd main %cd main !git checkout master !git reset --hard HEAD !git pull ###Output [Errno 2] No such file or directory: 'main' /content/main Already on 'master' Your branch is up to date with 'origin/master'. HEAD is now at c77a538 fix importing Already up to date. ###Markdown Install dependencies ###Code %%capture %cd /content/main/ !pip install -r requirements.txt ###Output _____no_output_____ ###Markdown Load and prepare data ###Code !mkdir '/content/main/data' %cd /content/main/data from google_drive_downloader import GoogleDriveDownloader as gdd gdd.download_file_from_google_drive(file_id='1bJunF1BZvVI5kx-AHCOkQXIKG7q0FkD5', dest_path='./SROIE2019.zip', unzip=True) %cd main from dataset.prepare import convert_sroie19_to_coco convert_sroie19_to_coco("/content/main/data/SROIE2019", "/content/main/data/sroie19") ###Output 0it [00:00, ?it/s] ###Markdown Train Detection model ###Code %reload_ext tensorboard %tensorboard --logdir="./weights" %cd main import os from tool.config import Config from modules.detection import get_model, get_loss, Trainer, get_dataloader, get_metric config = Config("./tool/config/detection/configs.yaml") class Arguments: print_per_iter = 10 val_interval = 1 save_interval = 50 resume = None saved_path = './weights' freeze_backbone = True args = Arguments os.environ['CUDA_VISIBLE_DEVICES'] = config.gpu_devices trainloader, valloader = get_dataloader(config) criterion = get_loss(config.loss).cuda() metric = get_metric(config) model = get_model(config.model) trainer = Trainer(args=args, config=config, model=model, criterion=criterion, metric=metric, train_loader=trainloader, val_loader=valloader) trainer.train() ###Output loading annotations into memory... Done (t=0.23s) creating index... index created! loading annotations into memory... Done (t=0.05s) creating index... index created! loading annotations into memory... Done (t=0.04s) creating index... index created! ###Markdown Inference ###Code # Load image import cv2 img = cv2.imread("/content/main/data/sroie19/images/val/X00016469670.jpg") img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Load model from modules.detection import PAN, draw_bbox config = Config("./tool/config/detection/configs.yaml") model = PAN(config, model_path = "/content/main/weights/2021-06-16_11-52-23/checkpoint/PANNet_last.pth") # Predict and show result import matplotlib.pyplot as plt _, boxes_list, _ = model.predict(img) res = draw_bbox(img, boxes_list) plt.imshow(res) plt.show() ###Output /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3458: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode)
notebooks/traversal/SSSP.ipynb
###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to every other vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since release 0.6* Last Edit: 08/16/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated Some notes about vertex IDs...* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times. * To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`). * For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb` Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. ###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the classic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # The SSSP algorithm requires that there are weights. 
Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src', destination='dst', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths for index, row in df.to_pandas().iterrows(): v = int(row['vertex']) p = cugraph.utils.get_traversed_path_list(df, v) print(v, ': ', p) ###Output _____no_output_____ ###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to everyother vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since rerlease 0.6* Last Edit: 07/08/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. ###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the clasic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # The SSSP algorithm requires that there are weights. 
Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src', destination='dst', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths for index, row in df.to_pandas().iterrows(): v = int(row['vertex']) p = cugraph.utils.get_traversed_path_list(df, v) print(v, ': ', p) ###Output _____no_output_____ ###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to everyother vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since rerlease 0.6* Last Edit: 08/16/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated Some notes about vertex IDs...* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times. * To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`). * For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb` Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. ###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the clasic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # The SSSP algorithm requires that there are weights. 
Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src', destination='dst', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths for index, row in df.to_pandas().iterrows(): v = int(row['vertex']) p = cugraph.utils.get_traversed_path_list(df, v) print(v, ': ', p) ###Output _____no_output_____ ###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to everyother vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since rerlease 0.6* Last Edit: 02/04/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. ###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the clasic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # Need to shift the vertex IDs to start with zero rather than one (next version of cuGraph will fix this issue) gdf["src_0"] = gdf["src"] - 1 gdf["dst_0"] = gdf["dst"] - 1 # The SSSP algorithm requires that there are weights. 
Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src_0', destination='dst_0', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths # Not using the filterred dataframe to ensure that vertex IDs match row IDs for i in range(len(ldf)) : v = ldf['vertex'][i] d = int(df['distance'][v]) path = [None] * ( int(longest_distance) + 1) path[d] = v while d > 0 : v = df['predecessor'][v] d = int(df['distance'][v]) path[d] = v print( "(" + str(i) + ") path: " + str(path)) ###Output Farthest vertex is 15 with distance of 3.0 ###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to everyother vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since rerlease 0.6* Last Edit: 02/04/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. ###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the clasic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # Need to shift the vertex IDs to start with zero rather than one (next version of cuGraph will fix this issue) gdf["src_0"] = gdf["src"] - 1 gdf["dst_0"] = gdf["dst"] - 1 # The SSSP algorithm requires that there are weights. 
Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src_0', destination='dst_0', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths # Not using the filterred dataframe to ensure that vertex IDs match row IDs for i in range(len(df)) : v = df['vertex'][i] d = int(df['distance'][v]) path = [None] * ( int(d) + 1) path[d] = v while d > 0 : v = df['predecessor'][v] d = int(df['distance'][v]) path[d] = v print( "(" + str(i) + ") path: " + str(path)) ###Output (0) path: [1, 0] (1) path: [1] (2) path: [1, 2] (3) path: [1, 3] (4) path: [1, 0, 4] (5) path: [1, 0, 5] (6) path: [1, 0, 6] (7) path: [1, 7] (8) path: [1, 30, 8] (9) path: [1, 2, 9] (10) path: [1, 0, 10] (11) path: [1, 0, 11] (12) path: [1, 0, 12] (13) path: [1, 13] (14) path: [1, 13, 33, 14] (15) path: [1, 13, 33, 15] (16) path: [1, 0, 5, 16] (17) path: [1, 17] (18) path: [1, 13, 33, 18] (19) path: [1, 19] (20) path: [1, 13, 33, 20] (21) path: [1, 21] (22) path: [1, 13, 33, 22] (23) path: [1, 13, 33, 23] (24) path: [1, 0, 31, 24] (25) path: [1, 0, 31, 25] (26) path: [1, 13, 33, 26] (27) path: [1, 2, 27] (28) path: [1, 2, 28] (29) path: [1, 13, 33, 29] (30) path: [1, 30] (31) path: [1, 0, 31] (32) path: [1, 30, 32] (33) path: [1, 13, 33] ###Markdown Single Source Shortest Path (SSSP)In this notebook, we will use cuGraph to compute the shortest path from a starting vertex to everyother vertex in our training dataset.Notebook Credits* Original Authors: Bradley Rees and James Wyles* available since rerlease 0.6* Last Edit: 02/04/2020RAPIDS Versions: 0.12.0 Test Hardware* GV100 32G, CUDA 10.0 IntroductionSingle source shortest path computes the shortest paths from the given starting vertex to all other reachable vertices. To compute SSSP for a graph in cuGraph we use:**cugraph.sssp(G, source)**Input* __G__: cugraph.Graph object* __source__: int, Index of the source vertexReturns * __df__: a cudf.DataFrame object with two columns: * df['vertex']: The vertex identifier for the vertex * df['distance']: The computed distance from the source vertex to this vertex * df['predecessor']: The predecessor vertex along this paths. Allows paths to be recreated cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).*![Karate Club](../img/zachary_black_lines.png)This is a small graph which allows for easy visual inspection to validate results. __Note__: The Karate dataset starts with vertex ID 1 which the cuGraph analytics assume a zero-based starting ID. 
###Code # Import needed libraries import cudf import cugraph ###Output _____no_output_____ ###Markdown Read the data and adjust the vertex IDs ###Code # Test file - using the classic Karate club dataset. datafile='../data/karate-data.csv' gdf = cudf.read_csv(datafile, names=["src", "dst"], delimiter='\t', dtype=["int32", "int32"]) # Need to shift the vertex IDs to start with zero rather than one (next version of cuGraph will fix this issue) gdf["src_0"] = gdf["src"] - 1 gdf["dst_0"] = gdf["dst"] - 1 # The SSSP algorithm requires that there are weights. Just use 1.0 here (equivalent to BFS) gdf["data"] = 1.0 gdf.head() ###Output _____no_output_____ ###Markdown Create a Graph and call SSSP ###Code # create a Graph G = cugraph.Graph() G.from_cudf_edgelist(gdf, source='src_0', destination='dst_0', edge_attr='data') # Call cugraph.sssp to get the distances from vertex 1: df = cugraph.sssp(G, 1) # Print the paths # Not using the filtered dataframe to ensure that vertex IDs match row IDs for i in range(len(df)) : p = cugraph.utils.get_traversed_path_list(df, i) print(p) ###Output _____no_output_____
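One of the outputs earlier in this notebook mentions a "farthest vertex"; as a closing sketch of my own (not a cell from the original notebook), that vertex can be recovered from the SSSP result `df` with ordinary cuDF filtering:

```python
# Illustrative only: find the vertex (or vertices) with the greatest SSSP distance.
# In a graph with unreachable vertices, infinite distances would need to be filtered out first.
max_dist = df["distance"].max()
farthest = df[df["distance"] == max_dist]
print("Farthest vertices (distance of " + str(max_dist) + "):")
print(farthest)
```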
examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb
###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsThis deepchem tutorial introduces the Atomic Convolutional Model. We'll see the structure of the Atomic Conv Model and write a simple program to run Atomic Convolutions. StructureACNNs directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. Following are the methods used to build the ACNN architecture:- Distance MatrixThe distance matrix R is constructed from the Cartesian atomic coordinates X. It calculates distances from the distance tensor D. The distance matrix construction accepts as input a (N, 3) coordinate matrix C. This matrix is “neighbor listed” into a (N, M) matrix R.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- Atom type convolutionThe output of the atom type convolution is constructed from the distance matrix R and atomic number matrix Z. The matrix R is fed into a (1x1) filter with stride 1 and depth of Na , where Na is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on the neighbor distance matrix R.- Radial Pooling layerRadial Pooling is basically a dimensionality reduction process which down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1xMx1) with stride 1 and a depth of Nr, where Nr is the number of desired radial filters.- Atomistic fully connected networkAtomic Convolution layers are stacked by feeding the flattened (N, Na x Nr) output of the radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. The same fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what we expect as the output.For training, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands.
This will take about 5 minutes to run to completion and install your environment. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ import deepchem as dc import os from deepchem.utils import download_url download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz") data_dir = os.path.join(dc.utils.get_data_dir()) dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz") raw_dataset = dc.utils.load_from_disk(dataset_file) print("Type of dataset is: %s" % str(type(raw_dataset))) print(raw_dataset[:5]) #print("Shape of dataset is: %s" % str(raw_dataset.shape)) ###Output Type of dataset is: <class 'pandas.core.frame.DataFrame'> pdb_id ... label 0 2d3u ... 6.92 1 3cyx ... 8.00 2 3uo4 ... 6.52 3 1p1q ... 4.89 4 3ag9 ... 8.05 [5 rows x 7 columns] ###Markdown Training the Model Now that we've seen what our dataset looks like let's go ahead and do some python on this dataset. ###Code import numpy as np import tensorflow as tf ###Output _____no_output_____ ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsThis deepchem tutorial introduces Atomic Convolutional Model. We'll see the structure of the Atomic Conv Model and write a simple program to run Atomic Convolutions. StructureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. Following are the methods use to build ACNN architecture:- Distance MatrixThe distance matrix R is constructed from the Cartesian atomic coordinates X. It calculates distance from the distance tensor D. The distance matrix construction accepts as input a (N, 3) coordinate matrix C. This matrix is “neighbor listed” into a (N, M) matrix R.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- Atom type convolutionThe output of the atom type convolution is constructed from the distance matrix R and atomic number matrix Z. The matrix R is fed into a (1x1) filter with stride 1 and depth of Na , where Na is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on neighbor distance matrix R.- Radial Pooling layerRadial Pooling is basically a dimensionality reduction process which down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1xMx1) with stride 1 and a depth of Nr, where Nr is the number of desired radial filters.- Atomistic fully connected networkAtomic Conolution layers are stacked by feeding the flattened(N, Na x Nr) output of radial pooling layer into the atom type convolution operation. 
Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what do we expect as the output.For the training purpose, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ import deepchem as dc import os from deepchem.utils import download_url download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz") data_dir = os.path.join(dc.utils.get_data_dir()) dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz") raw_dataset = dc.utils.save.load_from_disk(dataset_file) print("Type of dataset is: %s" % str(type(raw_dataset))) print(raw_dataset[:5]) #print("Shape of dataset is: %s" % str(raw_dataset.shape)) ###Output Type of dataset is: <class 'pandas.core.frame.DataFrame'> pdb_id ... label 0 2d3u ... 6.92 1 3cyx ... 8.00 2 3uo4 ... 6.52 3 1p1q ... 4.89 4 3ag9 ... 8.05 [5 rows x 7 columns] ###Markdown Training the Model Now that we've seen what our dataset looks like let's go ahead and do some python on this dataset. ###Code import numpy as np import tensorflow as tf ###Output _____no_output_____ ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsThis deepchem tutorial introduces Atomic Convolutional Model. We'll see the structure of the Atomic Conv Model and write a simple program to run Atomic Convolutions. StructureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. Following are the methods use to build ACNN architecture:- Distance MatrixThe distance matrix R is constructed from the Cartesian atomic coordinates X. It calculates distance from the distance tensor D. The distance matrix construction accepts as input a (N, 3) coordinate matrix C. 
This matrix is “neighbor listed” into a (N, M) matrix R.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- Atom type convolutionThe output of the atom type convolution is constructed from the distance matrix R and atomic number matrix Z. The matrix R is fed into a (1x1) filter with stride 1 and depth of Na , where Na is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on neighbor distance matrix R.- Radial Pooling layerRadial Pooling is basically a dimensionality reduction process which down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1xMx1) with stride 1 and a depth of Nr, where Nr is the number of desired radial filters.- Atomistic fully connected networkAtomic Conolution layers are stacked by feeding the flattened(N, Na x Nr) output of radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what do we expect as the output.For the training purpose, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code %tensorflow_version 1.x !curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import deepchem_installer %time deepchem_installer.install(version='2.3.0') import deepchem as dc import os from deepchem.utils import download_url download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz") data_dir = os.path.join(dc.utils.get_data_dir()) dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz") raw_dataset = dc.utils.save.load_from_disk(dataset_file) print("Type of dataset is: %s" % str(type(raw_dataset))) print(raw_dataset[:5]) #print("Shape of dataset is: %s" % str(raw_dataset.shape)) ###Output Type of dataset is: <class 'pandas.core.frame.DataFrame'> pdb_id ... label 0 2d3u ... 6.92 1 3cyx ... 8.00 2 3uo4 ... 6.52 3 1p1q ... 4.89 4 3ag9 ... 
8.05 [5 rows x 7 columns] ###Markdown Training the Model Now that we've seen what our dataset looks like let's go ahead and do some python on this dataset. ###Code import numpy as np import tensorflow as tf ###Output _____no_output_____ ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsBy [Nathan C. Frey](https://ncfrey.github.io/) | [Twitter](https://twitter.com/nc_frey) and [Bharath Ramsundar](https://rbharath.github.io/) | [Twitter](https://twitter.com/rbhar90)This DeepChem tutorial introduces the [Atomic Convolutional Neural Network](https://arxiv.org/pdf/1703.10603.pdf). We'll see the structure of the `AtomicConvModel` and write a simple program to run Atomic Convolutions. ACNN ArchitectureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture:- __Distance Matrix__ The distance matrix $R$ is constructed from the Cartesian atomic coordinates $X$. It calculates distances from the distance tensor $D$. The distance matrix construction accepts as input a $(N, 3)$ coordinate matrix $C$. This matrix is “neighbor listed” into a $(N, M)$ matrix $R$.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- **Atom type convolution** The output of the atom type convolution is constructed from the distance matrix $R$ and atomic number matrix $Z$. The matrix $R$ is fed into a (1x1) filter with stride 1 and depth of $N_{at}$ , where $N_{at}$ is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on the neighbor distance matrix $R$.- **Radial Pooling layer** Radial Pooling is basically a dimensionality reduction process that down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1x$M$x1) with stride 1 and a depth of $N_r$, where $N_r$ is the number of desired radial filters and $M$ is the maximum number of neighbors.- **Atomistic fully connected network** Atomic Convolution layers are stacked by feeding the flattened ($N$, $N_{at}$ $\cdot$ $N_r$) output of the radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what we expect as the output.For the training, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex and the target is the binding affinity ($K_i$) of the ligand to the protein in the complex. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. 
If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !/root/miniconda/bin/conda install -c conda-forge mdtraj -y -q # needed for AtomicConvs !pip install --pre deepchem import deepchem deepchem.__version__ import deepchem as dc import os import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from rdkit import Chem from deepchem.molnet import load_pdbbind from deepchem.models import AtomicConvModel from deepchem.feat import AtomicConvFeaturizer ###Output _____no_output_____ ###Markdown Getting protein-ligand data If you worked through [Tutorial 13](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/13_Modeling_Protein_Ligand_Interactions.ipynb) on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the [previous tutorial]((https://github.com/deepchem/deepchem/blob/master/examples/tutorials/13_Modeling_Protein_Ligand_Interactions.ipynb)), this time we'll simply initialize an `AtomicConvFeaturizer` and load the PDBbind dataset directly using MolNet. ###Code f1_num_atoms = 100 # maximum number of atoms to consider in the ligand f2_num_atoms = 1000 # maximum number of atoms to consider in the protein max_num_neighbors = 12 # maximum number of spatial neighbors for an atom acf = AtomicConvFeaturizer(frag1_num_atoms=f1_num_atoms, frag2_num_atoms=f2_num_atoms, complex_num_atoms=f1_num_atoms+f2_num_atoms, max_num_neighbors=max_num_neighbors, neighbor_cutoff=4) ###Output _____no_output_____ ###Markdown `load_pdbbind` allows us to specify if we want to use the entire protein or only the binding pocket (`pocket=True`) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures. ###Code %%time tasks, datasets, transformers = load_pdbbind(featurizer=acf, save_dir='.', data_dir='.', pocket=True, reload=False, set_name='core') datasets train, val, test = datasets ###Output _____no_output_____ ###Markdown Training the model Now that we've got our dataset, let's go ahead and initialize an `AtomicConvModel` to train. Keep the input parameters the same as those used in `AtomicConvFeaturizer`, or else we'll get errors. `layer_sizes` controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the [original paper](https://arxiv.org/pdf/1703.10603.pdf). 
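Because mismatched sizes between the featurizer and the model are an easy mistake to make, one defensive pattern (a sketch of my own, not part of the tutorial) is to collect the shared sizes in a single dictionary and unpack it into both constructors; the values below mirror the ones used in this tutorial.

```python
from deepchem.feat import AtomicConvFeaturizer
from deepchem.models import AtomicConvModel

# Illustrative only: a single dict of shared sizes keeps the featurizer and model in sync.
shared_sizes = dict(frag1_num_atoms=100,
                    frag2_num_atoms=1000,
                    complex_num_atoms=1100,
                    max_num_neighbors=12)

acf = AtomicConvFeaturizer(neighbor_cutoff=4, **shared_sizes)
acm = AtomicConvModel(n_tasks=1, batch_size=12, layer_sizes=[32, 32, 16],
                      learning_rate=0.003, **shared_sizes)
```

The cell below passes the same values explicitly, which works just as well as long as the two constructors are edited together.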
###Code acm = AtomicConvModel(n_tasks=1, frag1_num_atoms=f1_num_atoms, frag2_num_atoms=f2_num_atoms, complex_num_atoms=f1_num_atoms+f2_num_atoms, max_num_neighbors=max_num_neighbors, batch_size=12, layer_sizes=[32, 32, 16], learning_rate=0.003, ) losses, val_losses = [], [] %%time max_epochs = 50 for epoch in range(max_epochs): loss = acm.fit(train, nb_epoch=1, max_checkpoints_to_keep=1, all_losses=losses) metric = dc.metrics.Metric(dc.metrics.score_function.rms_score) val_losses.append(acm.evaluate(val, metrics=[metric])['rms_score']**2) # L2 Loss ###Output CPU times: user 9min 5s, sys: 1min 58s, total: 11min 4s Wall time: 11min 54s ###Markdown The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources. ###Code f, ax = plt.subplots() ax.scatter(range(len(losses)), losses, label='train loss') ax.scatter(range(len(val_losses)), val_losses, label='val loss') plt.legend(loc='upper right'); ###Output _____no_output_____ ###Markdown The [ACNN paper](https://arxiv.org/pdf/1703.10603.pdf) showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an `AtomicConvModel` with only a few lines of code and start predicting binding affinities! From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model. ###Code score = dc.metrics.Metric(dc.metrics.score_function.pearson_r2_score) for tvt, ds in zip(['train', 'val', 'test'], datasets): print(tvt, acm.evaluate(ds, metrics=[score])) ###Output train {'pearson_r2_score': 0.9437584772241725} val {'pearson_r2_score': 0.16399398585969166} test {'pearson_r2_score': 0.25027177101277903} ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsThis deepchem tutorial introduces Atomic Convolutional Model. We'll see the structure of the Atomic Conv Model and write a simple program to run Atomic Convolutions. StructureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. Following are the methods use to build ACNN architecture:- Distance MatrixThe distance matrix R is constructed from the Cartesian atomic coordinates X. It calculates distance from the distance tensor D. The distance matrix construction accepts as input a (N, 3) coordinate matrix C. 
This matrix is “neighbor listed” into a (N, M) matrix R.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- Atom type convolutionThe output of the atom type convolution is constructed from the distance matrix R and atomic number matrix Z. The matrix R is fed into a (1x1) filter with stride 1 and depth of Na , where Na is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on neighbor distance matrix R.- Radial Pooling layerRadial Pooling is basically a dimensionality reduction process which down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1xMx1) with stride 1 and a depth of Nr, where Nr is the number of desired radial filters.- Atomistic fully connected networkAtomic Conolution layers are stacked by feeding the flattened(N, Na x Nr) output of radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what do we expect as the output.For the training purpose, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. 
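Before the setup cell, the atom type convolution described above can be made a little more concrete with a tiny NumPy sketch (my own illustration, not DeepChem's implementation; all names here are invented): each neighbor distance is routed into the channel of that neighbor's atom type and zeroed everywhere else.

```python
import numpy as np

def atom_type_convolution(R, Z_neighbors, atom_types):
    """Toy step-function kernel: R is (N, M), Z_neighbors is (N, M), atom_types is (Na,)."""
    E = np.zeros(R.shape + (len(atom_types),))
    for a, z in enumerate(atom_types):
        # keep the neighbor distance only where the neighbor has atomic number z
        E[:, :, a] = R * (Z_neighbors == z)
    return E

R = np.random.rand(5, 3)                        # 5 atoms, 3 listed neighbors each
Z = np.random.choice([1, 6, 8], size=(5, 3))    # H, C, O neighbors
print(atom_type_convolution(R, Z, np.array([1, 6, 8])).shape)   # (5, 3, 3)
```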
###Code !wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh !chmod +x Anaconda3-2019.10-Linux-x86_64.sh !bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0 import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') import deepchem as dc import os from deepchem.utils import download_url download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz") data_dir = os.path.join(dc.utils.get_data_dir()) dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz") raw_dataset = dc.utils.save.load_from_disk(dataset_file) print("Type of dataset is: %s" % str(type(raw_dataset))) print(raw_dataset[:5]) #print("Shape of dataset is: %s" % str(raw_dataset.shape)) ###Output Type of dataset is: <class 'pandas.core.frame.DataFrame'> pdb_id smiles \ 0 2d3u CC1CCCCC1S(O)(O)NC1CC(C2CCC(CN)CC2)SC1C(O)O 1 3cyx CC(C)(C)NC(O)C1CC2CCCCC2C[NH+]1CC(O)C(CC1CCCCC... 2 3uo4 OC(O)C1CCC(NC2NCCC(NC3CCCCC3C3CCCCC3)N2)CC1 3 1p1q CC1ONC(O)C1CC([NH3+])C(O)O 4 3ag9 NC(O)C(CCC[NH2+]C([NH3+])[NH3+])NC(O)C(CCC[NH2... complex_id \ 0 2d3uCC1CCCCC1S(O)(O)NC1CC(C2CCC(CN)CC2)SC1C(O)O 1 3cyxCC(C)(C)NC(O)C1CC2CCCCC2C[NH+]1CC(O)C(CC1C... 2 3uo4OC(O)C1CCC(NC2NCCC(NC3CCCCC3C3CCCCC3)N2)CC1 3 1p1qCC1ONC(O)C1CC([NH3+])C(O)O 4 3ag9NC(O)C(CCC[NH2+]C([NH3+])[NH3+])NC(O)C(CCC... protein_pdb \ 0 ['HEADER 2D3U PROTEIN\n', 'COMPND 2D3U P... 1 ['HEADER 3CYX PROTEIN\n', 'COMPND 3CYX P... 2 ['HEADER 3UO4 PROTEIN\n', 'COMPND 3UO4 P... 3 ['HEADER 1P1Q PROTEIN\n', 'COMPND 1P1Q P... 4 ['HEADER 3AG9 PROTEIN\n', 'COMPND 3AG9 P... ligand_pdb \ 0 ['COMPND 2d3u ligand \n', 'AUTHOR GENERA... 1 ['COMPND 3cyx ligand \n', 'AUTHOR GENERA... 2 ['COMPND 3uo4 ligand \n', 'AUTHOR GENERA... 3 ['COMPND 1p1q ligand \n', 'AUTHOR GENERA... 4 ['COMPND 3ag9 ligand \n', 'AUTHOR GENERA... ligand_mol2 label 0 ['### \n', '### Created by X-TOOL on Thu Aug 2... 6.92 1 ['### \n', '### Created by X-TOOL on Thu Aug 2... 8.00 2 ['### \n', '### Created by X-TOOL on Fri Aug 2... 6.52 3 ['### \n', '### Created by X-TOOL on Thu Aug 2... 4.89 4 ['### \n', '### Created by X-TOOL on Thu Aug 2... 8.05 ###Markdown Training the Model Now that we've seen what our dataset looks like let's go ahead and do some python on this dataset. ###Code import numpy as np import tensorflow as tf ###Output _____no_output_____ ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsBy [Nathan C. Frey](https://ncfrey.github.io/) | [Twitter](https://twitter.com/nc_frey) and [Bharath Ramsundar](https://rbharath.github.io/) | [Twitter](https://twitter.com/rbhar90)This DeepChem tutorial introduces the [Atomic Convolutional Neural Network](https://arxiv.org/pdf/1703.10603.pdf). We'll see the structure of the `AtomicConvModel` and write a simple program to run Atomic Convolutions. ACNN ArchitectureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture:- __Distance Matrix__ The distance matrix $R$ is constructed from the Cartesian atomic coordinates $X$. 
It calculates distances from the distance tensor $D$. The distance matrix construction accepts as input a $(N, 3)$ coordinate matrix $C$. This matrix is “neighbor listed” into a $(N, M)$ matrix $R$.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- **Atom type convolution** The output of the atom type convolution is constructed from the distance matrix $R$ and atomic number matrix $Z$. The matrix $R$ is fed into a (1x1) filter with stride 1 and depth of $N_{at}$ , where $N_{at}$ is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on the neighbor distance matrix $R$.- **Radial Pooling layer** Radial Pooling is basically a dimensionality reduction process that down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1x$M$x1) with stride 1 and a depth of $N_r$, where $N_r$ is the number of desired radial filters and $M$ is the maximum number of neighbors.- **Atomistic fully connected network** Atomic Convolution layers are stacked by feeding the flattened ($N$, $N_{at}$ $\cdot$ $N_r$) output of the radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what we expect as the output.For the training, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex and the target is the binding affinity ($K_i$) of the ligand to the protein in the complex. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. 
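The "atomistic fully connected network" described above also fits in a few lines of NumPy (again an illustration of my own with invented weight shapes, not DeepChem's code): the same dense weights are applied to every atom's flattened feature row, and in the ACNN paper the per-atom outputs are then combined into a single molecule-level prediction, approximated here by a plain sum.

```python
import numpy as np

def atomistic_dense_network(P, W1, b1, W2, b2):
    """Toy per-atom dense network; P is (N, Na, Nr) pooled features for N atoms."""
    X = P.reshape(P.shape[0], -1)        # flatten each atom's features to (N, Na * Nr)
    h = np.maximum(0.0, X @ W1 + b1)     # hidden layer with weights shared across atoms
    per_atom = h @ W2 + b2               # one scalar contribution per atom
    return per_atom.sum()                # molecule-level output

P = np.random.rand(6, 4, 3)              # 6 atoms, 4 atom types, 3 radial filters
W1, b1 = np.random.rand(12, 8), np.zeros(8)
W2, b2 = np.random.rand(8, 1), np.zeros(1)
print(atomistic_dense_network(P, W1, b1, W2, b2))
```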
###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !/root/miniconda/bin/conda install -c conda-forge mdtraj -y -q # needed for AtomicConvs !pip install --pre deepchem import deepchem deepchem.__version__ import deepchem as dc import os import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from rdkit import Chem from deepchem.molnet import load_pdbbind from deepchem.models import AtomicConvModel from deepchem.feat import AtomicConvFeaturizer ###Output _____no_output_____ ###Markdown Getting protein-ligand data If you worked through [Tutorial 13](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/13_Modeling_Protein_Ligand_Interactions.ipynb) on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the [previous tutorial](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/13_Modeling_Protein_Ligand_Interactions.ipynb), this time we'll simply initialize an `AtomicConvFeaturizer` and load the PDBbind dataset directly using MolNet. ###Code f1_num_atoms = 100 # maximum number of atoms to consider in the ligand f2_num_atoms = 1000 # maximum number of atoms to consider in the protein max_num_neighbors = 12 # maximum number of spatial neighbors for an atom acf = AtomicConvFeaturizer(frag1_num_atoms=f1_num_atoms, frag2_num_atoms=f2_num_atoms, complex_num_atoms=f1_num_atoms+f2_num_atoms, max_num_neighbors=max_num_neighbors, neighbor_cutoff=4) ###Output _____no_output_____ ###Markdown `load_pdbbind` allows us to specify if we want to use the entire protein or only the binding pocket (`pocket=True`) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures. ###Code %%time tasks, datasets, transformers = load_pdbbind(featurizer=acf, save_dir='.', data_dir='.', pocket=True, reload=False, set_name='core') datasets train, val, test = datasets ###Output _____no_output_____ ###Markdown Training the model Now that we've got our dataset, let's go ahead and initialize an `AtomicConvModel` to train. Keep the input parameters the same as those used in `AtomicConvFeaturizer`, or else we'll get errors. `layer_sizes` controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the [original paper](https://arxiv.org/pdf/1703.10603.pdf).
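If the small core set starts to overfit, a simple early-stopping check can be wrapped around the one-epoch-at-a-time fitting pattern used in the next cells. This is a sketch of my own, not part of the tutorial; it assumes the `acm`, `train`, and `val` objects defined below, and `patience` is an invented knob.

```python
import deepchem as dc

best_val, patience, bad_epochs = float("inf"), 5, 0
metric = dc.metrics.Metric(dc.metrics.score_function.rms_score)

for epoch in range(50):
    acm.fit(train, nb_epoch=1, max_checkpoints_to_keep=1)
    val_rms = acm.evaluate(val, metrics=[metric])['rms_score']
    if val_rms < best_val:
        best_val, bad_epochs = val_rms, 0    # improvement: reset the counter
    else:
        bad_epochs += 1
        if bad_epochs >= patience:           # no improvement for `patience` epochs in a row
            break
```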
###Code acm = AtomicConvModel(n_tasks=1, frag1_num_atoms=f1_num_atoms, frag2_num_atoms=f2_num_atoms, complex_num_atoms=f1_num_atoms+f2_num_atoms, max_num_neighbors=max_num_neighbors, batch_size=12, layer_sizes=[32, 32, 16], learning_rate=0.003, ) losses, val_losses = [], [] %%time max_epochs = 50 for epoch in range(max_epochs): loss = acm.fit(train, nb_epoch=1, max_checkpoints_to_keep=1, all_losses=losses) metric = dc.metrics.Metric(dc.metrics.score_function.rms_score) val_losses.append(acm.evaluate(val, metrics=[metric])['rms_score']**2) # L2 Loss ###Output CPU times: user 9min 5s, sys: 1min 58s, total: 11min 4s Wall time: 11min 54s ###Markdown The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources. ###Code f, ax = plt.subplots() ax.scatter(range(len(losses)), losses, label='train loss') ax.scatter(range(len(val_losses)), val_losses, label='val loss') plt.legend(loc='upper right'); ###Output _____no_output_____ ###Markdown The [ACNN paper](https://arxiv.org/pdf/1703.10603.pdf) showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an `AtomicConvModel` with only a few lines of code and start predicting binding affinities! From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model. ###Code score = dc.metrics.Metric(dc.metrics.score_function.pearson_r2_score) for tvt, ds in zip(['train', 'val', 'test'], datasets): print(tvt, acm.evaluate(ds, metrics=[score])) ###Output train {'pearson_r2_score': 0.9437584772241725} val {'pearson_r2_score': 0.16399398585969166} test {'pearson_r2_score': 0.25027177101277903} ###Markdown Tutorial Part 14: Modeling Protein-Ligand Interactions with Atomic ConvolutionsThis deepchem tutorial introduces Atomic Convolutional Model. We'll see the structure of the Atomic Conv Model and write a simple program to run Atomic Convolutions. StructureACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. Following are the methods use to build ACNN architecture:- Distance MatrixThe distance matrix R is constructed from the Cartesian atomic coordinates X. It calculates distance from the distance tensor D. The distance matrix construction accepts as input a (N, 3) coordinate matrix C. 
This matrix is “neighbor listed” into a (N, M) matrix R.```python R = tf.reduce_sum(tf.multiply(D, D), 3) D: Distance Tensor R = tf.sqrt(R) R: Distance Matrix return R```- Atom type convolutionThe output of the atom type convolution is constructed from the distance matrix R and atomic number matrix Z. The matrix R is fed into a (1x1) filter with stride 1 and depth of Na , where Na is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on neighbor distance matrix R.- Radial Pooling layerRadial Pooling is basically a dimensionality reduction process which down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1xMx1) with stride 1 and a depth of Nr, where Nr is the number of desired radial filters.- Atomistic fully connected networkAtomic Conolution layers are stacked by feeding the flattened(N, Na x Nr) output of radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. Thesame fully connected weights and biases are used for each atom in a given molecule.Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what do we expect as the output.For the training purpose, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/14_Modeling_Protein_Ligand_Interactions_With_Atomic_Convolutions.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code %%capture %tensorflow_version 1.x !wget -c https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh !chmod +x Miniconda3-latest-Linux-x86_64.sh !bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0 import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') import deepchem as dc import os from deepchem.utils import download_url download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz") data_dir = os.path.join(dc.utils.get_data_dir()) dataset_file= os.path.join(dc.utils.get_data_dir(), "pdbbind_core_df.csv.gz") raw_dataset = dc.utils.save.load_from_disk(dataset_file) print("Type of dataset is: %s" % str(type(raw_dataset))) print(raw_dataset[:5]) #print("Shape of dataset is: %s" % str(raw_dataset.shape)) ###Output Type of dataset is: <class 'pandas.core.frame.DataFrame'> pdb_id ... 
label 0 2d3u ... 6.92 1 3cyx ... 8.00 2 3uo4 ... 6.52 3 1p1q ... 4.89 4 3ag9 ... 8.05 [5 rows x 7 columns] ###Markdown Training the Model Now that we've seen what our dataset looks like let's go ahead and do some python on this dataset. ###Code import numpy as np import tensorflow as tf ###Output _____no_output_____
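Before moving on, the neighbor-listed distance matrix R that the architecture sections above keep referring to can be prototyped with plain NumPy (a standalone sketch of my own, not a cell from the original notebook):

```python
import numpy as np

def neighbor_distance_matrix(coords, M):
    """Toy construction of the (N, M) neighbor-listed distance matrix R from (N, 3) coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]       # (N, N, 3) pairwise displacements
    dist = np.sqrt((diff ** 2).sum(axis=-1))              # (N, N) pairwise distances
    neighbor_idx = np.argsort(dist, axis=1)[:, 1:M + 1]   # M nearest neighbors, self excluded
    R = np.take_along_axis(dist, neighbor_idx, axis=1)    # (N, M) neighbor-listed distances
    return R, neighbor_idx

coords = np.random.rand(10, 3)   # ten atoms at random positions
R, idx = neighbor_distance_matrix(coords, M=4)
print(R.shape)                   # (10, 4)
```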
assignments/assignment_yourname_class4.ipynb
###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **crx.csv** dataset. This is a public dataset that can be found [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. This is a dataset that is usually used for binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The y (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values, also it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive. ###Code from google.colab import drive drive.mount('/content/drive') !ls /content/drive/My\ Drive/Colab\ Notebooks ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginning of the semester. key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class4.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class4.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) # Below is just a suggestion. These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=df_submit,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **reg-33-spring-2019.csv** dataset.
This is a dataset that I generated specifically for this semester. You can find the CSV file on my data site, at this location: [reg-33-spring-2019.csv](http://data.heatonresearch.com/data/t81-558/datasets/reg-33-spring-2019.csv).For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Normalize all numeric to zscores and all text/categorical to dummies. Do not normalize the *target*.* If you find any missing values (NA's), replace them with the median values for that column.* No need for any cross validation or holdout. Just train on the entire data set for 500 epochs.* You might get a warning, such as **"Warning: The mean of column pred differs from the solution file by 2.39"**. Unless this value is several hundred, do not worry about it. I used a neural network with layer sizes of (200, 100, 50) and got a RMSE of around 600, with a result of **Warning: The mean of column pred differs from the solution file by 89.07342078982037.** More epochs would likely improve this further, how low can you get it?* Your submission should contain the id (column name *id*), your prediction (column name *pred"), the expected value (from the **reg-33-spring-2019.csv** dataset, named *y*, and the absolute value of the difference between the expected and predicted (column name *diff*)* Your submitted dataframe will have these columns: id, pred. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ###Code import base64 import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests from sklearn import preprocessing # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = f"{name}-{tv}" df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). 
def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr( target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df[result].values.astype(np.float32), dummies.values.astype(np.float32) # Regression return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return f"{h}:{m:>02}:{s:>05.2f}" # Regression chart. def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from keras.models import Sequential from keras.layers.core import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "ivYj3b2yJY2dvQ9MEQMLe5ECGenGc82p4dywJxtQ" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux #file = "C:\\Users\\jeffh\\Dropbox\\school\\teaching\\wustl\\classes\\T81_558_deep_learning\\solutions\\assignment_solution_class4.ipynb" # Begin assignment path = "./data/" filename_read = os.path.join(path,"reg-33-spring-2019.csv") df = pd.read_csv(filename_read) # Add assignment code here submit(source_file=file,data=submit_df,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment, you will use the **crx.csv** dataset. This dataset is a public dataset that can you can find [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. The primary use for this dataset is binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.You should train a neural network and return the predictions. You will submit these predictions to the **submit** function. 
See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The *y* (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values; also, it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```. ###Code try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ###Output _____no_output_____ ###Markdown Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests import PIL import PIL.Image import io # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - List of pandas dataframes or images. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
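# Note: data must be a LIST here (e.g. data=[df_submit]); each item is sent as CSV (DataFrame) or PNG
# (PIL image). If you do submit images, BytesIO also needs to be imported (e.g. from io import BytesIO),
# since only the io module itself is imported above.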
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) payload = [] for item in data: if type(item) is PIL.Image.Image: buffered = BytesIO() item.save(buffered, format="PNG") payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')}) elif type(item) is pd.core.frame.DataFrame: payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")}) r= requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code==200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "Gx5en9cEVvaZnjut6vfLm1HG4ZO4PsI32sgldAXj" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class4.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class4.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) # Below is just a suggestion. These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=[df_submit],key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment, you will use the **crx.csv** dataset. This dataset is a public dataset that can you can find [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). 
You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. The primary use for this dataset is binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.You should train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The *y* (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values; also, it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```. ###Code try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ###Output _____no_output_____ ###Markdown Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests import PIL import PIL.Image import io # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - List of pandas dataframes or images. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
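# Any "Warning: The mean of column ... differs from the solution file" message comes back in the
# server's response text and is printed by this submit() function, so watch its output.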
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) payload = [] for item in data: if type(item) is PIL.Image.Image: buffered = BytesIO() item.save(buffered, format="PNG") payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')}) elif type(item) is pd.core.frame.DataFrame: payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")}) r= requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code==200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "Gx5en9cEVvaZnjut6vfLm1HG4ZO4PsI32sgldAXj" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class4.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class4.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) # Below is just a suggestion. These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df_submit,'a14','a16') # Submit submit(source_file=file,data=[df_submit],key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **reg-30-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. 
You can find the CSV file in the **data** directory of the class GitHub repository here: [reg-30-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/reg-30-spring-2018.csv).For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Normalize all numeric to zscores and all text/categorical to dummies. Do not normalize the *target*.* Your target (y) is the filed named *target*.* If you find any missing values (NA's), replace them with the median values for that column.* No need for any cross validation or holdout. Just train on the entire data set for 250 epochs.* You might get a warning, such as **"Warning: The mean of column pred differs from the solution file by 2.39"**. Do not worry about small values, it would be very hard to get exactly the same result as I did.* Your submitted dataframe will have these columns: id, pred. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ###Code from sklearn import preprocessing import matplotlib.pyplot as plt import numpy as np import pandas as pd import shutil import os import requests import base64 # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = "{}-{}".format(name, x) df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = "{}-{}".format(name, tv) df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. 
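    # Integer targets are assumed to be class labels and get one-hot encoded; any other dtype falls
    # through to the regression branch and is returned as a float32 column vector.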
if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32) else: # Regression return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) # Regression chart. def chart_regression(pred,y,sort=True): t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()}) if sort: t.sort_values(by=['y'],inplace=True) a = plt.plot(t['y'].tolist(),label='expected') b = plt.plot(t['pred'].tolist(),label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from keras.models import Sequential from keras.layers.core import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work. # You must also identify your source file. 
(modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows # file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class1.ipynb' # Mac/Linux file = '...location of your source file...' # Begin assignment path = "./data/" filename_read = os.path.join(path,"reg-30-spring-2018.csv") df = pd.read_csv(filename_read) # Encode the feature vector ids = df['id'] # Save a copy, if you like submit_df.to_csv('4.csv',index=False) # Submit the assignment submit(source_file=file,data=submit_df,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **reg-30-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file in the **data** directory of the class GitHub repository here: [reg-30-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/reg-30-spring-2018.csv).For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Normalize all numeric to zscores and all text/categorical to dummies. Do not normalize the *target*.* Your target (y) is the filed named *target*.* If you find any missing values (NA's), replace them with the median values for that column.* No need for any cross validation or holdout. Just train on the entire data set for 250 steps.* You might get a warning, such as **"Warning: The mean of column pred differs from the solution file by 2.39"**. Do not worry about small values, it would be very hard to get exactly the same result as I did.* Your submitted dataframe will have these columns: id, pred. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ###Code from sklearn import preprocessing import matplotlib.pyplot as plt import numpy as np import pandas as pd import shutil import os import requests import base64 # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = "{}-{}".format(name, x) df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. 
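# Example (hypothetical values): encode_text_single_dummy(df, 'a13', ['g', 'p']) adds 0/1 flag columns
# 'a13-g' and 'a13-p' while leaving the original 'a13' column in place.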
def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = "{}-{}".format(name, tv) df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32) else: # Regression return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) # Regression chart. def chart_regression(pred,y,sort=True): t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()}) if sort: t.sort_values(by=['y'],inplace=True) a = plt.plot(t['y'].tolist(),label='expected') b = plt.plot(t['pred'].tolist(),label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
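# When running from a notebook, pass source_file explicitly; __file__ is only defined for .py scripts,
# so leaving it as None here raises an exception.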
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from keras.models import Sequential from keras.layers.core import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows # file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class1.ipynb' # Mac/Linux file = '...location of your source file...' # Begin assignment path = "./data/" filename_read = os.path.join(path,"reg-30-spring-2018.csv") df = pd.read_csv(filename_read) # Encode the feature vector ids = df['id'] # Save a copy, if you like submit_df.to_csv('4.csv',index=False) # Submit the assignment submit(source_file=file,data=submit_df,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **crx.csv** dataset. This is a public dataset that can be found [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. This is a dataset that is usually used for binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. 
See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network.* Your submission file will contrain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The y for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values, also it could cause unwanted bias if we include the ultimate target (*a15*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive. 
###Code from google.colab import drive drive.mount('/content/drive') !ls /content/drive/My\ Drive/Colab\ Notebooks ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux #file = "C:\\Users\\jeffh\\Dropbox\\school\\teaching\\wustl\\classes\\T81_558_deep_learning\\solutions\\assignment_solution_class4.ipynb" # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) submit(source_file=file,data=df_submit,key=key,no=4) # Below is just a suggestion. These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=df_submit,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **reg-30-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file in the **data** directory of the class GitHub repository here: [reg-30-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/reg-30-spring-2018.csv).For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Normalize all numeric to zscores and all text/categorical to dummies. Do not normalize the *target*.* If you find any missing values (NA's), replace them with the median values for that column.* No need for any cross validation or holdout. Just train on the entire data set for 250 steps.* You might get a warning, such as **"Warning: The mean of column pred differs from the solution file by 2.39"**. 
Do not worry about small values, it would be very hard to get exactly the same result as I did.* Your submission should contain the id (column name *id*), your prediction (column name *pred"), the expected value (from the **reg-30-spring-2018.csv** dataset, named *y*, and the absolute value of the difference between the expected and predicted (column name *diff*)* Your submitted dataframe will have these columns: id, pred. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ###Code from sklearn import preprocessing import matplotlib.pyplot as plt import numpy as np import pandas as pd import shutil import os import requests import base64 # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = "{}-{}".format(name, x) df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = "{}-{}".format(name, tv) df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32) else: # Regression return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) # Regression chart. 
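# Plots predicted vs. expected values side by side (sorted by the expected value by default) so you
# can eyeball how well the regression fits.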
def chart_regression(pred,y,sort=True): t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()}) if sort: t.sort_values(by=['y'],inplace=True) a = plt.plot(t['y'].tolist(),label='expected') b = plt.plot(t['pred'].tolist(),label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from keras.models import Sequential from keras.layers.core import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows # file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class1.ipynb' # Mac/Linux file = '...location of your source file...' 
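# Reminder: the path above must point to a .py or .ipynb file whose name contains "_class4",
# otherwise the submit() call at the bottom of this cell will raise an exception.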
# Begin assignment path = "./data/" filename_read = os.path.join(path,"reg-30-spring-2018.csv") df = pd.read_csv(filename_read) # Encode the feature vector ids = df['id'] # Save a copy, if you like submit_df.to_csv('4.csv',index=False) # Submit the assignment submit(source_file=file,data=submit_df,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment, you will use the **crx.csv** dataset. This dataset is a public dataset that can you can find [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. The primary use for this dataset is binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.You should train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The *y* (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values; also, it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```. ###Code try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ###Output _____no_output_____ ###Markdown Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. 
My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests import PIL import PIL.Image import io # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - List of pandas dataframes or images. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) payload = [] for item in data: if type(item) is PIL.Image.Image: buffered = BytesIO() item.save(buffered, format="PNG") payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')}) elif type(item) is pd.core.frame.DataFrame: payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")}) r= requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code==200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "Gx5en9cEVvaZnjut6vfLm1HG4ZO4PsI32sgldAXj" # This is an example key and will not work. # You must also identify your source file. 
(modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class4.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class4.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) df from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import LabelEncoder from matplotlib import pyplot from keras import metrics def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.to_numpy().flatten()}) if sort: t.sort_values(by=['y'], inplace=True) pyplot.plot(t['y'].tolist(), label='expected') pyplot.plot(t['pred'].tolist(), label='prediction') pyplot.ylabel('output') pyplot.legend() pyplot.show() def onehotcoding(data): OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False) OH_cols = pd.DataFrame(OH_encoder.fit_transform(data.values.reshape(-1,1))) return OH_cols def labelencoding(data): encoder = LabelEncoder() encoded = pd.DataFrame(encoder.fit_transform(data.values.reshape(-1,1))) return encoded def preprocess_data(df, target): if target: df = labelencoding(df) return df else: columns = [x for x in df.columns if str(df[x].dtypes) in ('object') and x != 'a13'] for x in columns: df[x] = df[x].map({'t':1, 'f':0}) oh_columns = onehotcoding(df['a13']) oh_columns.index = df.index df.drop(columns='a13', axis=1, inplace=True) df_concat = pd.concat([df, oh_columns], axis=1, ignore_index=True) df_concat.index = df.index return df_concat def fill_missing_numeric(df,current,target): X = df.drop([target], axis=1) y = df[target] X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0) train_columns = ['s3','a8','a9','a10','a11','a12','a13','a15'] X_train = preprocess_data(X_train[train_columns], False) X_test = preprocess_data(X_test[train_columns], False) y_train = preprocess_data(y_train, True) y_test = preprocess_data(y_test, True) merged_df = X_train.append(X_test).reset_index().set_index('index').sort_index() model = create_network(X_train, X_test, y_train, y_test) # Fill in as needed mask_nan = df.loc[pd.isna(df[current]), :].index predictions = model.predict(merged_df.iloc[mask_nan]) df.loc[mask_nan, current] = predictions print(df.loc[mask_nan]) def create_network(X_train, X_test, y_train, y_test): model = Sequential() model.add(Dense(50, input_dim=X_train.shape[1], activation='relu')) # Hidden 1 model.add(Dense(25, activation='relu')) # Hidden 2 model.add(Dense(1)) # Output model.compile(loss='mean_squared_error', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=10, verbose=2, mode='auto', restore_best_weights=True) model.fit(X_train,y_train,validation_data=(X_test,y_test), callbacks=[monitor],verbose=0,epochs=100) pred = model.predict(X_test) print(pred.shape) chart_regression(pred.flatten(), y_test) return model df_submit = fill_missing_numeric(df,'a2','a16') df_submit ###Output /var/folders/2j/m7m8jdyd0lldlm4fstt86mrh0000gn/T/ipykernel_64347/649713166.py:41: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[x] = df[x].map({'t':1, 'f':0}) /opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/frame.py:4906: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().drop( /var/folders/2j/m7m8jdyd0lldlm4fstt86mrh0000gn/T/ipykernel_64347/649713166.py:41: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[x] = df[x].map({'t':1, 'f':0}) /opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/frame.py:4906: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().drop( /opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/sklearn/preprocessing/_label.py:115: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) 2021-12-28 12:53:54.902801: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled. 2021-12-28 12:53:55.093754: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled. ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment, you will use the **crx.csv** dataset. This dataset is a public dataset that can you can find [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. The primary use for this dataset is binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.You should train a neural network and return the predictions. You will submit these predictions to the **submit** function. 
See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The *y* (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values; also, it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```. ###Code try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ###Output _____no_output_____ ###Markdown Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. 
def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class3.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class3.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class3.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) submit(source_file=file,data=df_submit,key=key,no=4) # Below is just a suggestion. These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=df_submit,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **reg-30-spring-2018.csv** dataset. This is a dataset that I generated specifically for this semester. You can find the CSV file in the **data** directory of the class GitHub repository here: [reg-30-spring-2018.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/reg-30-spring-2018.csv).For this assignment you will train a neural network and return the predictions. 
You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Normalize all numeric to zscores and all text/categorical to dummies. Do not normalize the *target*.* Your target (y) is the filed named *target*.* If you find any missing values (NA's), replace them with the median values for that column.* No need for any cross validation or holdout. Just train on the entire data set for 250 epochs.* You might get a warning, such as **"Warning: The mean of column pred differs from the solution file by 2.39"**. Do not worry about small values, it would be very hard to get exactly the same result as I did.* Your submitted dataframe will have these columns: id, pred. Helpful FunctionsYou will see these at the top of every module and assignment. These are simply a set of reusable functions that we will make use of. Each of them will be explained as the semester progresses. They are explained in greater detail as the course progresses. Class 4 contains a complete overview of these functions. ###Code import base64 import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests from sklearn import preprocessing # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1 # at every location where the original column (name) matches each of the target_values. One column is added for # each target value. def encode_text_single_dummy(df, name, target_values): for tv in target_values: l = list(df[name].astype(str)) l = [1 if str(x) == str(tv) else 0 for x in l] name2 = f"{name}-{tv}" df[name2] = l # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df, name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert all missing values in the specified column to the default def missing_default(df, name, default_value): df[name] = df[name].fillna(default_value) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr( target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. 
if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df[result].values.astype(np.float32), dummies.values.astype(np.float32) # Regression return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return f"{h}:{m:>02}:{s:>05.2f}" # Regression chart. def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() # Remove all rows where the specified column is +/- sd standard deviations def remove_outliers(df, name, sd): drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))] df.drop(drop_rows, axis=0, inplace=True) # Encode a column to a range between normalized_low and normalized_high. def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1, data_low=None, data_high=None): if data_low is None: data_low = min(df[name]) data_high = max(df[name]) df[name] = ((df[name] - data_low) / (data_high - data_low)) \ * (normalized_high - normalized_low) + normalized_low # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from keras.models import Sequential from keras.layers.core import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "qgABjW9GKV1vvFSQNxZW9akByENTpTAo2T9qOjmh" # This is an example key and will not work. # You must also identify your source file. 
(modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows # file='/Users/jeff/projects/t81_558_deep_learning/assignment_yourname_class1.ipynb' # Mac/Linux file = '...location of your source file...' # Begin assignment path = "./data/" filename_read = os.path.join(path,"reg-30-spring-2018.csv") df = pd.read_csv(filename_read) # Encode the feature vector ids = df['id'] # Save a copy, if you like submit_df.to_csv('4.csv',index=False) # Submit the assignment submit(source_file=file,data=submit_df,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment you will use the **crx.csv** dataset. This is a public dataset that can be found [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. This is a dataset that is usually used for binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.For this assignment you will train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The y (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values, also it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Assignment Submit FunctionYou will submit the 10 programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any basic problems. 
**It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - Pandas dataframe output. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) r = requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"), 'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code == 200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive. ###Code from google.colab import drive drive.mount('/content/drive') !ls /content/drive/My\ Drive/Colab\ Notebooks ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "PPboscDL2M94HCbkbvfOLakXXNy3dh5x2VV1Mlpm" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/resources/t81_558_deep_learning/assignment_yourname_class1.ipynb' # IBM Data Science Workbench # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\t81_558_class1_intro_python.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux #file = "C:\\Users\\jeffh\\Dropbox\\school\\teaching\\wustl\\classes\\T81_558_deep_learning\\solutions\\assignment_solution_class4.ipynb" # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) submit(source_file=file,data=df_submit,key=key,no=4) # Below is just a suggestion. These are the imports that I used. 
from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=df_submit,key=key,no=4) ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).**Module 4 Assignment: Classification and Regression Neural Network****Student Name: Your Name** Assignment InstructionsFor this assignment, you will use the **crx.csv** dataset. This dataset is a public dataset that can you can find [here](https://archive.ics.uci.edu/ml/datasets/credit+approval). You should use the CSV file on my data site, at this location: [crx.csv](https://data.heatonresearch.com/data/t81-558/crx.csv) because it includes column headers. The primary use for this dataset is binary classification. There are 15 attributes, plus a target column that contains only + or -. Some of the columns have missing values.You should train a neural network and return the predictions. You will submit these predictions to the **submit** function. See [Assignment 1](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class1.ipynb) for details on how to submit an assignment or check that one was submitted.Complete the following tasks:* Your task is to replace missing values in columns *a2* and *a14* with values estimated by a neural network (one neural network for *a2* and another for *a14*).* Your submission file will contain the same headers as the source CSV: *a1*, *a2*, *s3*, *a4*, *a5*, *a6*, *a7*, *a8*, *a9*, *a10*, *a11*, *a12*, *a13*, *a14*, *a15*, and *a16*.* You should only need to modify *a2* and *a14*.* Neural networks can be much more powerful at filling missing variables than median and mean.* Train two neural networks to predict *a2* and *a14*. * The *y* (target) for training the two nets will be *a2* and *a14*, depending on which you are trying to fill.* The x for training the two nets will be 's3','a8','a9','a10','a11','a12','a13','a15'. These are chosen because it is important not to use any columns with missing values; also, it could cause unwanted bias if we include the ultimate target (*a16*).* ONLY predict new values for missing values in *a2* and *a14*.* You will likely get this small warning: Warning: The mean of column a14 differs from the solution file by 0.20238937709643778. (might not matter if small) Google CoLab InstructionsIf you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to ```/content/drive```. 
###Code try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False ###Output _____no_output_____ ###Markdown Assignment Submit FunctionYou will submit the ten programming assignments electronically. The following **submit** function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems. **It is unlikely that should need to modify this function.** ###Code import base64 import os import numpy as np import pandas as pd import requests import PIL import PIL.Image import io # This function submits an assignment. You can submit an assignment as much as you like, only the final # submission counts. The paramaters are as follows: # data - List of pandas dataframes or images. # key - Your student key that was emailed to you. # no - The assignment class number, should be 1 through 1. # source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name. # . The number must match your assignment number. For example "_class2" for class assignment #2. def submit(data,key,no,source_file=None): if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when a Jupyter notebook.') if source_file is None: source_file = __file__ suffix = '_class{}'.format(no) if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix)) with open(source_file, "rb") as image_file: encoded_python = base64.b64encode(image_file.read()).decode('ascii') ext = os.path.splitext(source_file)[-1].lower() if ext not in ['.ipynb','.py']: raise Exception("Source file is {} must be .py or .ipynb".format(ext)) payload = [] for item in data: if type(item) is PIL.Image.Image: buffered = BytesIO() item.save(buffered, format="PNG") payload.append({'PNG':base64.b64encode(buffered.getvalue()).decode('ascii')}) elif type(item) is pd.core.frame.DataFrame: payload.append({'CSV':base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")}) r= requests.post("https://api.heatonresearch.com/assignment-submit", headers={'x-api-key':key}, json={ 'payload': payload,'assignment': no, 'ext':ext, 'py':encoded_python}) if r.status_code==200: print("Success: {}".format(r.text)) else: print("Failure: {}".format(r.text)) ###Output _____no_output_____ ###Markdown Assignment 4 Sample CodeThe following code provides a starting point for this assignment. ###Code import os import pandas as pd from scipy.stats import zscore from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation import pandas as pd import io import requests import numpy as np from sklearn import metrics # This is your student key that I emailed to you at the beginnning of the semester. key = "Gx5en9cEVvaZnjut6vfLm1HG4ZO4PsI32sgldAXj" # This is an example key and will not work. # You must also identify your source file. (modify for your local setup) # file='/content/drive/My Drive/Colab Notebooks/assignment_yourname_class4.ipynb' # Google CoLab # file='C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class4.ipynb' # Windows file='/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class4.ipynb' # Mac/Linux # Begin assignment df = pd.read_csv("https://data.heatonresearch.com/data/t81-558/crx.csv",na_values=['?']) # Below is just a suggestion. 
These are the imports that I used. from scipy.stats import zscore from sklearn.model_selection import train_test_split from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping def fill_missing_numeric(df,current,target): # Fill in as needed return None df_submit = fill_missing_numeric(df,'a2','a16') df_submit = fill_missing_numeric(df,'a14','a16') # Submit submit(source_file=file,data=[df_submit],key=key,no=4) ###Output _____no_output_____
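###Markdown The starter cell above intentionally leaves `fill_missing_numeric` as a stub. Below is a minimal sketch of one possible implementation, following the approach described in the instructions: a small Keras regression network trained on the complete columns 's3','a8','a9','a10','a11','a12','a13','a15', used to fill only the missing rows. The layer sizes, the use of `pd.get_dummies` for the categorical predictors, and the early-stopping settings are illustrative assumptions, not the required solution. ###Code
# Hypothetical sketch: train a small regression net on the rows where `current` is known,
# then predict and fill ONLY the rows where it is missing.
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

def fill_missing_numeric(df, current, target):
    # `target` (a16) is kept to match the starter signature but deliberately not used as a feature.
    predictors = ['s3','a8','a9','a10','a11','a12','a13','a15']  # columns with no missing values
    x = pd.get_dummies(df[predictors]).astype('float32')         # one-hot encode the categorical predictors
    missing = df[current].isna()

    model = Sequential([
        Dense(50, input_dim=x.shape[1], activation='relu'),
        Dense(25, activation='relu'),
        Dense(1)])
    model.compile(loss='mean_squared_error', optimizer='adam')
    monitor = EarlyStopping(monitor='loss', min_delta=1e-3, patience=5, restore_best_weights=True)
    model.fit(x[~missing].values, df.loc[~missing, current].values,
              callbacks=[monitor], verbose=0, epochs=100)

    # Fill only the missing entries; every other value stays untouched.
    df = df.copy()
    df.loc[missing, current] = model.predict(x[missing].values).flatten()
    return df

# Because this sketch returns a copy, the two fills should be chained:
# df_submit = fill_missing_numeric(df, 'a2', 'a16')
# df_submit = fill_missing_numeric(df_submit, 'a14', 'a16')
###Output _____no_output_____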
11-Arquivos.ipynb
###Markdown Working with files ###Code # Create the file if it does not exist arquivo = open('hightech.txt','w') # r read, w write # Write to the file arquivo.write('Python\n') arquivo.write('Dados\n') arquivo.write('Internet das Coisas\n') arquivo.write('Realidade Virtual e Aumentada\n') arquivo.write('Revolução 4.0\n') # Open in read mode arquivo = open('hightech.txt','r') # Information about the object print(arquivo) type(arquivo) # Read the whole file arq = arquivo.read() arq # Read the first 6 characters print(arquivo.read(6)) # Report the current position (number of characters read) print(arquivo.tell()) # Return to the beginning of the file - cursor print(arquivo.seek(0,0)) # Print line by line print(arquivo.readline()) # Iterate over each line with a for loop for linha in arquivo: print(linha) arquivo = open('hightech.txt','r') # Iterate over each line with a for loop using readlines() for linha in arquivo.readlines(): print(linha) arquivo = open('hightech.txt','r') # Iterate over each line with a list comprehension and store it in a list palavras = [l for l in arquivo] palavras # Appending content - append mode arq = open("hightech.txt", "a") arq.write("I.A") # Write to a file named by user input filename = input('Digite um nome: ') arq = open(filename+".txt", "w") # String concatenation txt = "Bem vindo a linguagem " txt += "Python" txt txt = "Python é uma linguagem de programação de alto nível, interpretada, de script, imperativa, orientada a objetos, funcional, de tipagem dinâmica e forte. Foi lançada por Guido van Rossum em 1991. Atualmente possui um modelo de desenvolvimento comunitário, aberto e gerenciado pela organização sem fins lucrativos Python Software Foundation." arq = open('texto.txt', 'w') arq.write(txt) arq = open('texto.txt', 'r') for i in arq: print(i) # With "with" the file is closed automatically with open('texto.txt','r') as arq: conteudo = arq.read() conteudo # Total number of characters len(conteudo) # Copy the content of one file into another arquivo1 = 'hightech.txt' arquivo2 = 'texto.txt' open(arquivo1,'w').write(open(arquivo2,'r').read()) arq = open(arquivo2, 'r') for i in arq: print(i) # Flush the write buffer to disk arquivo.flush() # Which mode the file was opened in print('Modo: ',arquivo.mode) # The file name, with extension print(arquivo.name) # Close the file and release its resources arquivo.close() # Check whether it is closed print(arquivo.closed) # Generate an html file pagina = open('index.html','w', encoding='utf-8') pagina.write(""" <!DOCTYPE html>\n <html lang="pt-br">\n <head>\n <title>Python</title>\n </head>\n <body>\n <h1>Python é uma linguagem de programação de alto nível</h1>\n </body>\n </html>\n""") ###Output _____no_output_____
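###Markdown One follow-up worth noting: the cell above opens `index.html` but never closes `pagina`, so the HTML may not be flushed to disk until the interpreter exits. A short illustrative sketch of the same step using a `with` block, reusing the notebook's names: ###Code
# Writing index.html inside a `with` block guarantees the buffer is flushed and the
# handle is closed, even if an exception occurs, so no explicit flush()/close() is needed.
html = """<!DOCTYPE html>
<html lang="pt-br">
  <head><title>Python</title></head>
  <body><h1>Python é uma linguagem de programação de alto nível</h1></body>
</html>
"""
with open('index.html', 'w', encoding='utf-8') as pagina:
    pagina.write(html)

# Reading it back the same way:
with open('index.html', 'r', encoding='utf-8') as pagina:
    print(pagina.read())
###Output _____no_output_____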
nbs/regexMethod.ipynb
###Markdown Extract different patterns as an experiment ###Code regex_list =[ r'([A-Z]{2,}\-[a-zA-Z]+)', r'([A-Z]{2,}\s+\d)', r'(\b[A-Z](?:[\.&]?[A-Z]){1,7}\b)', # single backslashes in this raw string so \b is a word boundary, not a literal backslash r'(\b(?:[a-zA-Z]\.){2,})', r'(?:(?<=\.|\s)[A-Z]\.)+', r'([A-Z]{2,})' ] reg_convert = regex_list[::-1] ls = [] # accumulator, initialized once before the loop so earlier results are not discarded for r in regex_list: ac = data['text'].str.findall(r) ls.append(ac) print(ac) test['text'].str.findall('(\\b[A-Z](?:[\\.&]?[A-Z]){1,7}\\b)') test['text'].str.findall('|'.join(regex_list)) data['acronyms_extract'] = data['text'].str.findall('|'.join(regex_list)) data.to_excel('trial1.xlsx') test['text'][1] ###Output _____no_output_____ ###Markdown Baseline: find anything that is capitalized ###Code test['acronyms_found'] = test['text'].str.findall('([A-Z]{2,})') test.head(10) ###Output _____no_output_____
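###Markdown For reference, a self-contained toy example of the extraction is sketched below; the `data` and `test` DataFrames used in the cells above are assumed to be loaded elsewhere in the project, so the sketch builds a tiny sample Series instead and shows what `str.findall` returns for two of the patterns. ###Code
# Toy illustration only -- the real notebook applies the same calls to data['text'] / test['text'].
import pandas as pd

sample = pd.Series([
    "The NASA and ESA teams met the U.S. delegation.",
    "COVID-19 research uses ML and A.I. methods.",
])

print(sample.str.findall(r'([A-Z]{2,})'))                        # runs of 2+ capitals: NASA, ESA / COVID, ML
print(sample.str.findall('(\\b[A-Z](?:[\\.&]?[A-Z]){1,7}\\b)'))  # also catches dotted forms such as U.S / A.I
###Output _____no_output_____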
how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.30.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. 
###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. 
TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. 
###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.34.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
###Code
# convert the test data to dataframe
X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()
y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()

# call the predict function on the model
y_pred = fitted_model.predict(X_test_df)
y_pred
###Output _____no_output_____
###Markdown Calculate metrics for the prediction
Now plot a confusion matrix to compare the truth (actual) values with the predicted values from the trained model that was returned.
###Code
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools

cf = confusion_matrix(y_test_df.values, y_pred)
plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['False', 'True']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks, class_labels)
plt.yticks([-0.5, 0, 1, 1.5], ['', 'False', 'True', ''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):
    plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')
plt.show()
###Output _____no_output_____
###Markdown Explanation
In this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to run the AutoML model and the explainer model by deploying an AKS web service.
Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.
Run the explanation
Download the engineered feature importance from artifact store
You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features.
###Code
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
print(engineered_explanations.get_feature_importance_dict())
print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url())
###Output _____no_output_____
###Markdown Download the raw feature importance from artifact store
You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the raw features.
###Code
raw_explanations = client.download_model_explanation(raw=True)
print(raw_explanations.get_feature_importance_dict())
print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url())
###Output _____no_output_____
###Markdown Retrieve any other AutoML model from training
###Code
automl_run, fitted_model = local_run.get_output(metric='accuracy')
###Output _____no_output_____
###Markdown Setup the model explanations for AutoML models
The fitted_model can generate the following, which will be used for getting the engineered explanations using automl_setup_model_explanations: 1. Featurized data from train samples/test samples 2. Gather engineered name lists 3.
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
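Beyond the portal dashboard, the importance dictionaries returned by `get_feature_importance_dict()` can also be plotted locally. The helper below is a hypothetical addition (not part of the original notebook); it is shown on the engineered importances computed above, and the same call works on the raw importances produced in the next cell.
###Code
def plot_top_importances(importance_dict, top_n=10, title='Feature importance'):
    # Sort a {feature_name: importance} dict and bar-plot the top_n entries
    top = sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    names, values = zip(*top)
    plt.figure(figsize=(8, 4))
    plt.barh(range(len(names)), values)
    plt.yticks(range(len(names)), names)
    plt.gca().invert_yaxis()
    plt.xlabel('Importance')
    plt.title(title)
    plt.show()

plot_top_importances(engineered_explanations.get_feature_importance_dict(), title='Top engineered features')
###Output _____no_output_____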
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain'
# Verify that cluster does not exist already
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)
###Output _____no_output_____
###Markdown Deploy web service to AKS
###Code
# Set the web service configuration (using default here)
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration()
aks_service_name = 'model-scoring-local-aks'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[scoring_explainer_model, original_model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
###Output _____no_output_____
###Markdown View the service logs
###Code
aks_service.get_logs()
###Output _____no_output_____
###Markdown Consume the web service using the run method to do the scoring and explanation of scoring.
We test the web service by passing it data. The run() method retrieves API keys behind the scenes to make sure that the call is authenticated.
###Code
# Serialize the first row of the test data into json
X_test_json = X_test_df[:1].to_json(orient='records')
print(X_test_json)

# Call the service to get the predictions and the engineered and raw explanations
output = aks_service.run(X_test_json)

# Print the predicted value
print('predictions:\n{}\n'.format(output['predictions']))

# Print the engineered feature importances for the predicted value
print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values']))

# Print the raw feature importances for the predicted value
print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values']))
###Output _____no_output_____
###Markdown Clean up
Delete the service.
###Code
aks_service.delete()
###Output _____no_output_____
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)
Automated Machine Learning
_**Classification of credit card fraudulent transactions with local run **_
Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Test](Tests)
1. [Explanation](Explanation)
1. [Acknowledgements](Acknowledgements)
Introduction
In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.
This notebook uses local machine compute to train the model.
If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model.
4. Explore the results.
5. Test the fitted model.
6.
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.15.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
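Before scoring locally, it can also be useful to look at the cross-validation metrics that AutoML already logged for the selected run. A brief sketch (not part of the original notebook), assuming the `best_run` object retrieved above:
###Code
# List the metrics AutoML logged for the best run (e.g. AUC_weighted, average_precision_score_weighted)
logged_metrics = best_run.get_metrics()
for metric_name in sorted(logged_metrics):
    print(metric_name, ':', logged_metrics[metric_name])
###Output _____no_output_____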
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain'
# Verify that cluster does not exist already
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)
###Output _____no_output_____
###Markdown Deploy web service to AKS
###Code
# Set the web service configuration (using default here)
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration()
aks_service_name = 'model-scoring-local-aks'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[scoring_explainer_model, original_model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
###Output _____no_output_____
###Markdown View the service logs
###Code
aks_service.get_logs()
###Output _____no_output_____
###Markdown Consume the web service using the run method to do the scoring and explanation of scoring.
We test the web service by passing it data. The run() method retrieves API keys behind the scenes to make sure that the call is authenticated.
###Code
# Serialize the first row of the test data into json
X_test_json = X_test_df[:1].to_json(orient='records')
print(X_test_json)

# Call the service to get the predictions and the engineered and raw explanations
output = aks_service.run(X_test_json)

# Print the predicted value
print('predictions:\n{}\n'.format(output['predictions']))

# Print the engineered feature importances for the predicted value
print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values']))
###Output _____no_output_____
###Markdown Clean up
Delete the service.
###Code
aks_service.delete()
###Output _____no_output_____
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)
Automated Machine Learning
_**Classification of credit card fraudulent transactions with local run **_
Contents
1. [Introduction](Introduction)
1. [Setup](Setup)
1. [Train](Train)
1. [Results](Results)
1. [Test](Test)
1. [Acknowledgements](Acknowledgements)
Introduction
In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.
This notebook uses local machine compute to train the model.
If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model.
4. Explore the results.
5. Test the fitted model.
Setup
As part of the setup you have already created an Azure ML `Workspace` object.
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import os import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.2, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. 
###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. 
[Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Visualization model's feature importance in azure portal6. Explore any model's explanation and explore feature importance in azure portal7. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.4.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Best Model 's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
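A small hypothetical helper (not part of the original notebook) can make the downloaded importance dictionary easier to scan by turning it into a sorted DataFrame; it only assumes `pandas` as imported above. After the next cell runs, call it on `engineered_explanations.get_feature_importance_dict()`.
###Code
def importance_table(importance_dict, top_n=15):
    # Turn a {feature_name: importance} dict into a sorted DataFrame for quick inspection
    rows = sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return pd.DataFrame(rows, columns=['feature', 'importance']).head(top_n)
###Output _____no_output_____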
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown ExplanationsIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. 
SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.32.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. 
Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. 
The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. 
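The cell after this aside creates the actual MimicWrapper. If the surrogate-model idea itself is unfamiliar, the following framework-agnostic sketch shows what a mimic explainer does conceptually: fit a simple, interpretable model to the predictions of a black-box model and read feature importances from the surrogate. The scikit-learn estimators and synthetic dataset here are illustrative only and are not part of this notebook's AutoML workflow.
###Code
# Conceptual sketch of a surrogate ("mimic") explainer using plain scikit-learn.
# This only illustrates the idea behind MimicWrapper, not the azureml implementation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X_demo, y_demo = make_classification(n_samples=1000, n_features=8, random_state=0)

# 1. Train an opaque "black-box" model
black_box = GradientBoostingClassifier(random_state=0).fit(X_demo, y_demo)

# 2. Fit an interpretable surrogate to the black-box *predictions*
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_demo, black_box.predict(X_demo))

# 3. Read global feature importances off the surrogate
for idx, importance in enumerate(surrogate.feature_importances_):
    print(f'feature_{idx}: {importance:.3f}')
###Output
_____no_output_____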
###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. ###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
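Before running the registration cell that follows, it can be worth a quick local sanity check that the pickled scoring explainer round-trips and produces local importances for a few transformed test rows. This is an optional sketch; it assumes the `scoring_explainer.pkl` file written above is still present in the working directory and that `automl_explainer_setup_obj` is available from the earlier setup cell.
###Code
# Optional sanity check before registering/deploying the scoring explainer.
import joblib

reloaded_explainer = joblib.load('scoring_explainer.pkl')

# Explain a handful of transformed test rows, as the deployed scoring script will do
sample_rows = automl_explainer_setup_obj.X_test_transform[:5]
local_importances = reloaded_explainer.explain(sample_rows)
print('Computed local importances for', len(local_importances), 'rows')
###Output
_____no_output_____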
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Visualization model's feature importance in azure portal6. 
Explore any model's explanation and explore feature importance in azure portal7. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.5.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Best Model 's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
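Once the download cell below has populated `engineered_explanations`, the importance dictionary can also be turned into a quick local bar chart rather than viewed only in the portal. This is a small optional sketch; the choice of the top 10 features is arbitrary.
###Code
# Optional: plot the top engineered features by global importance.
# Run this after the next cell has defined engineered_explanations.
import matplotlib.pyplot as plt

def plot_top_features(explanation, top_n=10):
    importance = explanation.get_feature_importance_dict()
    top = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    names, values = zip(*top)
    plt.barh(range(len(names)), values)
    plt.yticks(range(len(names)), names)
    plt.gca().invert_yaxis()
    plt.xlabel('Global feature importance')
    plt.title('Top engineered features')
    plt.show()

# plot_top_features(engineered_explanations)
###Output
_____no_output_____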
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown ExplanationsIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import os import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "preprocess": True, "experiment_timeout_hours": 0.2, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. 
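After the submit cell below has finished, it can also be useful to compare the child iterations on the primary metric directly from the run object rather than only through the widget. The helper below is an optional sketch using the generic Run API (`get_children`, `get_metrics`); the metric name matches the primary metric configured above.
###Code
# Optional helper: summarize child-run metrics once local_run has completed.
import pandas as pd

def summarize_children(parent_run, metric='average_precision_score_weighted'):
    rows = []
    for child in parent_run.get_children():
        child_metrics = child.get_metrics()
        if metric in child_metrics:
            rows.append({'run_id': child.id, metric: child_metrics[metric]})
    return pd.DataFrame(rows).sort_values(metric, ascending=False)

# summarize_children(local_run).head()
###Output
_____no_output_____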
###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. 
[Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.33.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|

**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
###Code
automl_settings = {
    "n_cross_validations": 3,
    "primary_metric": 'average_precision_score_weighted',
    "experiment_timeout_hours": 0.25,  # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible
    "verbosity": logging.INFO,
    "enable_stack_ensemble": False
}

automl_config = AutoMLConfig(task = 'classification',
                             debug_log = 'automl_errors.log',
                             training_data = training_data,
                             label_column_name = label_column_name,
                             **automl_settings
                            )
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)

# If you need to retrieve a run that already started, use the following code
#from azureml.train.automl.run import AutoMLRun
#local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')
###Output
_____no_output_____
###Markdown
Results

Widget for Monitoring Runs

The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.

**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Analyze results

Retrieve the Best Model

Below we select the best pipeline from our iterations. The `get_output` method on the AutoML run returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
fitted_model
###Output
_____no_output_____
###Markdown
Print the properties of the model

The fitted_model is a Python object and you can read the different properties of the object.

Tests

Now that the model is trained, split the data in the same way it was split for training (the difference here is that the data is being split locally) and then run the test data through the trained model to get the predicted values.
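Because fraudulent transactions are rare, plain accuracy on the test split can look deceptively good. Once the prediction cell below has produced `y_pred`, metrics such as the per-class precision/recall report and the weighted average precision (the primary metric optimized above) give a more honest picture. The helper below is an optional sketch; it assumes the fitted pipeline exposes `predict_proba`.
###Code
# Optional evaluation helper for the imbalanced fraud data.
from sklearn.metrics import classification_report, average_precision_score

def evaluate_predictions(model, X, y_true, y_hat):
    print(classification_report(y_true, y_hat))
    # Average precision needs scores/probabilities rather than hard labels
    proba = model.predict_proba(X)[:, 1]
    print('Test average precision:', average_precision_score(y_true, proba))

# evaluate_predictions(fitted_model, X_test_df, y_test_df.values.ravel(), y_pred)
###Output
_____no_output_____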
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
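Conceptually, the raw importances are obtained by aggregating the engineered importances back onto the original columns through the feature map passed in `feature_maps` above. The toy NumPy example below illustrates that aggregation; it is not the library's exact implementation, just the intuition behind `get_raw=True`.
###Code
# Toy illustration of mapping engineered feature importances back to raw features.
import numpy as np

# Two raw columns expand into four engineered columns
# (say, one numeric passthrough and a three-level one-hot encoding).
feature_map = np.array([[1, 0, 0, 0],   # raw column A -> engineered column 0
                        [0, 1, 1, 1]])  # raw column B -> engineered columns 1-3
engineered_importance = np.array([0.40, 0.10, 0.25, 0.05])

raw_importance = feature_map @ engineered_importance
print(dict(zip(['A', 'B'], np.round(raw_importance, 2))))
###Output
_____no_output_____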
###Code
# Compute the raw explanations
raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
                                     eval_dataset=automl_explainer_setup_obj.X_test_transform,
                                     raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)
print(raw_explanations.get_feature_importance_dict())
print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url())
###Output
_____no_output_____
###Markdown
Initialize the scoring explainer, then save and upload it for later use in scoring explanation
###Code
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer
import joblib

# Initialize the ScoringExplainer
scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])

# Pickle scoring explainer locally to './scoring_explainer.pkl'
scoring_explainer_file_name = 'scoring_explainer.pkl'
with open(scoring_explainer_file_name, 'wb') as stream:
    joblib.dump(scoring_explainer, stream)

# Upload the scoring explainer to the automl run
automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name)
###Output
_____no_output_____
###Markdown
Deploying the scoring and explainer models as a web service to Azure Kubernetes Service (AKS)

We use the TreeScoringExplainer from the azureml.interpret package to create the scoring explainer, which will be used to compute the raw and engineered feature importances at inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service.
###Code
# Register the trained automl model present in the 'outputs' folder in the artifacts
original_model = automl_run.register_model(model_name='automl_model',
                                           model_path='outputs/model.pkl')
scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer',
                                                    model_path='outputs/scoring_explainer.pkl')
###Output
_____no_output_____
###Markdown
Create the conda dependencies for setting up the service

We need to download the conda dependencies using the automl_run object.
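The next cell downloads the run's own conda specification and builds an `Environment` named `myenv` from it. If you later need extra pip packages in the service image, they can be appended through the environment's conda dependencies before deployment; the helper below is an optional sketch and the package name is only an example, not a requirement of this notebook.
###Code
# Optional: append extra pip packages to the environment once `myenv` exists.
def add_pip_packages(environment, packages):
    conda_deps = environment.python.conda_dependencies
    for package in packages:
        conda_deps.add_pip_package(package)
    return environment

# myenv = add_pip_packages(myenv, ['azureml-defaults'])
###Output
_____no_output_____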
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.11.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
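Most AutoML classification pipelines expose `predict_proba`; if so, the decision threshold can be moved away from 0.5 to trade precision against recall, which is often useful when fraud cases are rare. The helper below is an optional sketch under that assumption; run it after the prediction cell that follows has defined `X_test_df` and `y_test_df`.
###Code
# Optional: score the fitted model at different decision thresholds.
from sklearn.metrics import precision_score, recall_score

def score_at_threshold(model, X, y_true, threshold=0.5):
    proba = model.predict_proba(X)[:, 1]
    y_hat = (proba >= threshold).astype(int)
    return {'threshold': threshold,
            'precision': precision_score(y_true, y_hat),
            'recall': recall_score(y_true, y_hat)}

# for t in (0.3, 0.5, 0.7):
#     print(score_at_threshold(fitted_model, X_test_df, y_test_df.values.ravel(), t))
###Output
_____no_output_____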
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict function on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow plot a confusion matrix to compare the actual (truth) values with the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using the azureml-explain-model package. We will also show how to run the automl model and the explainer model by deploying them as an AKS web service. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following, which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list.
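Before building fresh explanations, it can help to plot the engineered importances that were downloaded above instead of just printing the dictionary. This is an optional sketch (not part of the original sample) that reuses the already-imported matplotlib:
###Code
# Optional: plot the top 10 engineered feature importances downloaded from the artifact store
importances = engineered_explanations.get_feature_importance_dict()
top_features = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)[:10]
names, values = zip(*top_features)
plt.barh(range(len(names)), values)
plt.yticks(range(len(names)), names)
plt.gca().invert_yaxis()  # most important feature at the top
plt.xlabel('Importance')
plt.title('Top engineered feature importances')
plt.show()
###Output
_____no_output_____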
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.explain.model package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
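Before registering, you can optionally reload the pickled scoring explainer and confirm it produces local importances for a few transformed validation rows. A minimal sanity-check sketch (not part of the original sample; the slicing assumes X_test_transform supports row slicing, as a DataFrame or array does):
###Code
# Optional sanity check: reload the pickled scoring explainer and explain a few transformed rows
with open(scoring_explainer_file_name, 'rb') as stream:
    reloaded_explainer = joblib.load(stream)
sample_importances = reloaded_explainer.explain(automl_explainer_setup_obj.X_test_transform[:5])
print('Computed local importance values for', len(sample_importances), 'rows')
###Output
_____no_output_____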
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to create the conda dependencies comprising of the azureml-explain-model, azureml-train-automl and azureml-defaults packages. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.explain.model from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. 
Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.23.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
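As noted in the model-retrieval step above, `get_output` also accepts a specific metric or iteration when you want a model other than the overall best. A brief optional illustration before scoring (the iteration index is a placeholder, not taken from the original sample):
###Code
# Illustrative only: retrieve the run and fitted model for a specific metric
best_auc_run, best_auc_model = local_run.get_output(metric='AUC_weighted')
# ...or for a particular iteration (placeholder index)
# some_run, some_model = local_run.get_output(iteration=3)
###Output
_____no_output_____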
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
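The raw importances computed below are reported against the original column names, which the setup object exposes via raw_feature_names (it is passed to explain() in the next cell). A quick optional peek, assuming the attribute is populated for this run:
###Code
# Optional: list the original (raw) feature names the raw explanations will be keyed on
raw_names = automl_explainer_setup_obj.raw_feature_names
print(len(raw_names), 'raw features:', raw_names)
###Output
_____no_output_____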
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.20.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.10.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
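Once the setup cell below has run, a quick way to confirm what it produced is to print a few of those structures. This is only a sanity-check sketch; it relies on the same attribute names (`engineered_feature_names`, `classes`, `X_test_transform`) that the explainer cells further down already use.
###Code
# Run this after the automl_setup_model_explanations cell below has executed.
print(len(automl_explainer_setup_obj.engineered_feature_names), 'engineered feature names')
print('Classes found in the label column:', automl_explainer_setup_obj.classes)
print('Featurized test data shape:', automl_explainer_setup_obj.X_test_transform.shape)
###Output
_____no_output_____
###Markdown
The setup itself runs in the next cell.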
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.explain.model package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
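Registration makes both artifacts retrievable by name from the workspace model registry. As an optional check once the registration cell below has run, the registry can be listed; this sketch only uses the standard `Model.list` call together with the model names chosen in that cell.
###Code
from azureml.core.model import Model

# List the registered versions of the two models (run after the registration cell below).
for registered in Model.list(ws, name='automl_model') + Model.list(ws, name='scoring_explainer'):
    print(registered.name, 'version', registered.version)
###Output
_____no_output_____
###Markdown
The registration itself happens in the next cell.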
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to create the conda dependencies comprising of the azureml-explain-model, azureml-train-automl and azureml-defaults packages. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.explain.model from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. 
Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.38.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
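The 'Print the properties of the model' note above has no accompanying cell, so here is a small optional sketch of what inspecting the fitted pipeline can look like. It assumes the fitted model behaves like a scikit-learn `Pipeline` (which AutoML classification models generally do); the exact step names vary from run to run.
###Code
# Walk the steps of the fitted pipeline: featurization first, the trained estimator last.
for step_name, step in fitted_model.named_steps.items():
    print(step_name, '->', type(step).__name__)
###Output
_____no_output_____
###Markdown
Now split the held-out data locally and score it.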
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
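The importance values come back as a plain feature-name-to-score dictionary, so they are easy to post-process. The helper below is not part of the SDK; it simply sorts the dictionary returned by `get_feature_importance_dict()` so the strongest raw features can be read off after the next cell has run.
###Code
# Small helper (not an SDK function): return the k features with the largest absolute importance.
def top_importances(importance_dict, k=5):
    return sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

# Example, once raw_explanations exists:
# print(top_importances(raw_explanations.get_feature_importance_dict()))
###Output
_____no_output_____
###Markdown
The raw explanations are computed next.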
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
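The downloaded `myenv.yml` already pins everything the AutoML model needs. If the entry script ever requires an extra package, it can be added to the environment object before deployment; the commented snippet below is an optional sketch using the standard `CondaDependencies` API, with `azureml-defaults` purely as an example package name.
###Code
# Optional: add an extra pip package to the environment built from myenv.yml (run after the next cell).
# conda_dep = myenv.python.conda_dependencies
# conda_dep.add_pip_package('azureml-defaults')
# myenv.python.conda_dependencies = conda_dep
###Output
_____no_output_____
###Markdown
Download the dependency file and build the environment next.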
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.21.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
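Before scoring locally, it can also help to look at the metrics logged for the best child run. The sketch below only uses the generic `Run.get_metrics()` call on the `best_run` object retrieved above; the metric names are the primary metrics listed in the Train section, and any that were not logged simply print as None.
###Code
# Inspect the metrics logged for the best child run.
best_run_metrics = best_run.get_metrics()
for name in ('average_precision_score_weighted', 'AUC_weighted', 'accuracy'):
    print(name, ':', best_run_metrics.get(name))
###Output
_____no_output_____
###Markdown
Now run the held-out data through the fitted model.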
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
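Beyond the portal dashboard, the raw importances can also be plotted inline with matplotlib (imported as `plt` at the top of the notebook). The helper below is a plain plotting sketch, not part of the SDK, and is meant to be called on the dictionary produced by the next cell.
###Code
# Horizontal bar chart of the k raw features with the largest absolute importance
# (call after raw_explanations exists).
def plot_top_importances(importance_dict, k=10):
    items = sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    names, values = zip(*items)
    plt.barh(range(len(names)), values)
    plt.yticks(range(len(names)), names)
    plt.xlabel('importance')
    plt.gca().invert_yaxis()
    plt.show()

# Example: plot_top_importances(raw_explanations.get_feature_importance_dict())
###Output
_____no_output_____
###Markdown
The raw explanations are computed in the next cell.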
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
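Before building the environment, one small check is worth doing: the entry script further down parses its input with `pd.read_json(raw_data, orient='records')`, the same convention used later when the service is called. The optional sketch below round-trips one row of `X_test_df` (from the Tests section) through that serialization to catch surprises before anything is deployed.
###Code
# Verify that a row of test data survives the records-oriented JSON round-trip the entry script relies on.
payload = X_test_df[:1].to_json(orient='records')
roundtrip = pd.read_json(payload, orient='records')
print('round-trip shape:', roundtrip.shape, 'original:', X_test_df[:1].shape)
print('columns preserved:', set(roundtrip.columns) == set(X_test_df.columns))
###Output
_____no_output_____
###Markdown
With that check done, download the conda dependencies next.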
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.18.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
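In addition to the confusion matrix plotted below, a per-class text summary is handy for an imbalanced problem like this one. The sketch assumes the `y_test_df` and `y_pred` variables created in the next two cells, so run it after them.
###Code
from sklearn.metrics import classification_report

# Per-class precision/recall/F1 summary (run after y_test_df and y_pred exist).
print(classification_report(y_test_df.values, y_pred, target_names=['False', 'True']))
###Output
_____no_output_____
###Markdown
Split the held-out data and generate the predictions first.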
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the results as a confusion matrix that compares the truth (actual) values with the values predicted by the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf = confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3.
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
###Code print("This notebook was created using version 1.25.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown.
The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. ###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code ws = Workspace.from_config() # choose a name for experiment experiment_name = "automl-classification-ccard-local" experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Experiment Name"] = experiment.name output["SDK Version"] = azureml.core.VERSION pd.set_option("display.max_colwidth", None) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = "Class" ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": "average_precision_score_weighted", "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False, } automl_config = AutoMLConfig( task="classification", debug_log="automl_errors.log", training_data=training_data, label_column_name=label_column_name, **automl_settings, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code local_run = experiment.submit(automl_config, show_output=True) # If you need to retrieve a run that already started, use the following code # from azureml.train.automl.run import AutoMLRun # local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns( columns=[label_column_name] ).to_pandas_dataframe() y_test_df = validation_data.keep_columns( columns=[label_column_name], validate=True ).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf = confusion_matrix(y_test_df.values, y_pred) plt.imshow(cf, cmap=plt.cm.Blues, interpolation="nearest") plt.colorbar() plt.title("Confusion Matrix") plt.xlabel("Predicted") plt.ylabel("Actual") class_labels = ["False", "True"] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks, class_labels) plt.yticks([-0.5, 0, 1, 1.5], ["", "False", "True", ""]) # plotting text value inside cells thresh = cf.max() / 2.0 for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])): plt.text( j, i, format(cf[i, j], "d"), horizontalalignment="center", color="white" if cf[i, j] > thresh else "black", ) plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. 
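As a quick aside before running the explanation steps: the confusion matrix above only gives raw counts, and for an imbalanced fraud dataset the per-class precision and recall are usually more informative. The sketch below is an illustrative addition rather than part of the original workflow; it uses scikit-learn's `classification_report` and assumes the `y_test_df` and `y_pred` variables produced in the Tests section above are still in scope.
###Code
# Per-class precision/recall/F1 to complement the confusion matrix above.
# Assumes y_test_df and y_pred were created by the Tests cells earlier in this notebook.
from sklearn.metrics import classification_report

print(classification_report(y_test_df.values.ravel(), y_pred, digits=4))
###Output
_____no_output_____
###Markdown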
Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print( "You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url() ) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print( "You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url() ) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric="accuracy") ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import ( automl_setup_model_explanations, ) automl_explainer_setup_obj = automl_setup_model_explanations( fitted_model, X=X_train, X_test=X_test, y=y_train, task="classification", automl_run=automl_run, ) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. 
###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper( ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params, ) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain( ["local", "global"], eval_dataset=automl_explainer_setup_obj.X_test_transform ) print(engineered_explanations.get_feature_importance_dict()) print( "You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url() ) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. ###Code # Compute the raw explanations raw_explanations = explainer.explain( ["local", "global"], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw, ) print(raw_explanations.get_feature_importance_dict()) print( "You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url() ) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer( explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map] ) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = "scoring_explainer.pkl" with open(scoring_explainer_file_name, "wb") as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file("outputs/scoring_explainer.pkl", scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model( model_name="automl_model", model_path="outputs/model.pkl" ) scoring_explainer_model = automl_run.register_model( model_name="scoring_explainer", model_path="outputs/scoring_explainer.pkl" ) ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, "myenv.yml") myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import ( automl_setup_model_explanations, ) def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path("automl_model") scoring_explainer_path = Model.get_model_path("scoring_explainer") automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient="records") # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations( automl_model, X_test=data, task="classification" ) # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain( automl_explainer_setup_obj.X_test_transform ) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain( automl_explainer_setup_obj.X_test_transform, get_raw=True ) # You can return any data type as long as it is JSON-serializable return { "predictions": predictions.tolist(), "engineered_local_importance_values": engineered_local_importance_values, "raw_local_importance_values": raw_local_importance_values, } ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script="score.py", environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = "scoring-explain" # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print("Found existing cluster, use it.") except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size="STANDARD_D3_V2") aks_target = ComputeTarget.create( workspace=ws, name=aks_name, provisioning_configuration=prov_config ) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name = "model-scoring-local-aks" aks_service = Model.deploy( workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target, ) aks_service.wait_for_deployment(show_output=True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient="records") print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print("predictions:\n{}\n".format(output["predictions"])) # Print the engineered feature importances for the predicted value print( "engineered_local_importance_values:\n{}\n".format( output["engineered_local_importance_values"] ) ) # Print the raw feature importances for the predicted value print( "raw_local_importance_values:\n{}\n".format(output["raw_local_importance_values"]) ) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.16.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on the run object (`local_run` here) returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values.
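Credit card fraud labels are typically highly imbalanced, so it can help to look at the label distribution of the validation split before interpreting the predictions. This is a minimal, illustrative sketch: it assumes the `validation_data` TabularDataset and `label_column_name` defined in the Load Data step, and that the minority class corresponds to fraud.
###Code
# Check how imbalanced the validation split is before reading accuracy-style metrics.
# Assumes validation_data (TabularDataset) and label_column_name from the Load Data step.
label_counts = (
    validation_data.keep_columns(columns=[label_column_name])
    .to_pandas_dataframe()[label_column_name]
    .value_counts()
)
print(label_counts)
print("Minority-class ratio: {:.4%}".format(label_counts.min() / label_counts.sum()))
###Output
_____no_output_____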
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict function on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow plot a confusion matrix to compare the truth (actual) values with the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to run the AutoML model and the explainer model by deploying them as an AKS web service. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following, which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list.
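###Markdown
Once the next cell has built `automl_explainer_setup_obj`, the structures listed above can be inspected directly. A minimal optional sketch; the attribute names are the same ones consumed later in this notebook (such as `classes`, `engineered_feature_names` and `X_test_transform`):

###Code
# Run after the following cell has created automl_explainer_setup_obj.
# Peek at a few of the structures it holds for the explainer.
print(automl_explainer_setup_obj.classes)                        # classes found in the label column
print(automl_explainer_setup_obj.engineered_feature_names[:10])  # first few engineered feature names
print(type(automl_explainer_setup_obj.X_test_transform))         # featurized test samples

###Output
_____no_output_____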
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
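###Markdown
Before running the registration cell that follows, it can be worth a quick local sanity check that the pickled scoring explainer reloads cleanly. This is an optional sketch, not part of the original flow.

###Code
# Optional sanity check: reload the pickled scoring explainer from disk
# before registering it with the workspace.
import joblib

reloaded_explainer = joblib.load(scoring_explainer_file_name)
print(type(reloaded_explainer))

###Output
_____no_output_____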
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. 
Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.7.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete. **Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values.
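###Markdown
For an imbalanced problem such as fraud detection, per-class precision and recall are more informative than plain accuracy. The following is a small optional sketch using scikit-learn; run it after the next cell has produced `y_test_df` and `y_pred`.

###Code
# Optional: per-class precision/recall/F1 for the predictions computed below.
from sklearn.metrics import classification_report

print(classification_report(y_test_df.values, y_pred, target_names=['False', 'True']))

###Output
_____no_output_____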
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.explain.model package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to create the conda dependencies comprising of the azureml-explain-model, azureml-train-automl and azureml-defaults packages. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.explain.model from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import os import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "preprocess": True, "experiment_timeout_minutes": 10, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ablity to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. 
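###Markdown
Besides the widget shown further down, the run can also be monitored programmatically once the submit cell below has returned. A minimal optional sketch (the metric key is illustrative and should match the primary metric configured above):

###Code
# Run after the submit cell below has returned local_run.
# Print the parent run status and each child iteration's primary metric value.
print(local_run.get_status())
for child_run in local_run.get_children():
    metrics = child_run.get_metrics()
    print(child_run.id, metrics.get('average_precision_score_weighted'))

###Output
_____no_output_____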
###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object.See *Print the properties of the model* section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification/auto-ml-classification.ipynb). DeployTo deploy the model into a web service endpoint, see _Deploy_ section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment/auto-ml-classification-with-deployment.ipynb) Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. 
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.19.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. 
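###Markdown
Credit card fraud datasets are typically heavily imbalanced, which is worth confirming before interpreting any metric. After the next cell has loaded and split the data, the following optional sketch takes a quick look at a few rows and at the label distribution.

###Code
# Run after the following cell has created `dataset` and `label_column_name`.
# take(n) pulls a small sample; to_pandas_dataframe() materializes data locally.
print(dataset.take(5).to_pandas_dataframe())
print(dataset.to_pandas_dataframe()[label_column_name].value_counts())

###Output
_____no_output_____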
###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. 
TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. 
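###Markdown
Before moving on to the raw explanations downloaded in the next cell, the engineered importance dictionary retrieved earlier can be ranked to surface the most influential engineered features. A minimal optional sketch:

###Code
# Rank the engineered feature importances (retrieved above) by absolute value.
engineered_importance = engineered_explanations.get_feature_importance_dict()
top_features = sorted(engineered_importance.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
for name, value in top_features:
    print('{}: {:.4f}'.format(name, value))

###Output
_____no_output_____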
###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
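###Markdown
After the next cell has downloaded `myenv.yml`, you can print the file to confirm which packages the scoring environment will contain. A small optional sketch:

###Code
# Run after the following cell has downloaded 'myenv.yml'.
with open('myenv.yml') as env_file:
    print(env_file.read())

###Output
_____no_output_____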
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. 
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.35.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. 
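As a quick sanity check it can also be worth looking at the label distribution, since credit card fraud data is typically highly imbalanced. A minimal sketch, assuming the `training_data` dataset and `label_column_name` created in the next cell: ###Code
# Count how many fraudulent (1) and non-fraudulent (0) transactions are in the training split
label_counts = training_data.keep_columns(columns=[label_column_name]).to_pandas_dataframe()[label_column_name].value_counts()
print(label_counts)
###Output _____no_output_____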
###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross-validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes; remove it for real use cases, as it will drastically limit the ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a Python object and you can read the different properties of the object.
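For example, the returned model is typically a scikit-learn style pipeline, so you can list its steps to see which featurization and algorithm AutoML selected. A small sketch, assuming the `fitted_model` returned above (the exact step names vary from run to run): ###Code
# Walk the pipeline and print each step name together with its class
for step_name, step in fitted_model.steps:
    print(step_name, '->', type(step).__name__)
###Output _____no_output_____ ###Markdown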
TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. 
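Because get_feature_importance_dict() returns a plain {feature: importance} dictionary, the values are also easy to inspect or plot locally. A small sketch, assuming the `raw_explanations` object downloaded in the next cell: ###Code
# Turn the importance dictionary into a sorted Series and plot it as a horizontal bar chart
import pandas as pd
raw_importances = pd.Series(raw_explanations.get_feature_importance_dict()).sort_values()
raw_importances.plot.barh(title='Raw feature importance', figsize=(6, 8))
plt.show()
###Output _____no_output_____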
###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
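If you want to check which packages the restored environment will contain before using it for deployment, you can print its specification. A small sketch, assuming the `myenv` environment object created in the next cell: ###Code
# Print the conda specification (Python version, conda and pip packages) held by the environment
print(myenv.python.conda_dependencies.serialize_to_string())
###Output _____no_output_____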
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
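# (The provisioning configuration below uses STANDARD_D3_V2 nodes; as a rough guide, Azure ML recommends a total of at least 12 vCPUs across the cluster for production web services.)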
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.37.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross-validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes; remove it for real use cases, as it will drastically limit the ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a Python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (the difference here is that the data is split locally) and then run the test data through the trained model to get the predicted values.
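Beyond the confusion matrix plotted below, summary metrics such as precision, recall and F1 are often more informative for an imbalanced fraud dataset. A small sketch, assuming `y_test_df` and `y_pred` produced in the following cells: ###Code
# Precision, recall and F1 per class for the hold-out data
from sklearn.metrics import classification_report
print(classification_report(y_test_df.values.ravel(), y_pred, target_names=['False', 'True']))
###Output _____no_output_____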
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using the run method to do the scoring and explanation of the scoring.We test the web service by passing data. The run() method retrieves the API keys behind the scenes to make sure that the call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.29.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. 
This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross-validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes; remove it for real use cases, as it will drastically limit the ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a Python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (the difference here is that the data is split locally) and then run the test data through the trained model to get the predicted values.
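Since this configuration optimizes average_precision_score_weighted, it can also be useful to compute a comparable metric on the hold-out data, which needs predicted probabilities rather than hard labels. A small sketch, assuming `X_test_df` and `y_test_df` from the following cells (AutoML classification models expose predict_proba): ###Code
# Average precision on the hold-out split, using the probability of the positive (fraud) class
import numpy as np
from sklearn.metrics import average_precision_score
y_scores = np.asarray(fitted_model.predict_proba(X_test_df))[:, 1]
print('Average precision:', average_precision_score(y_test_df.values.ravel(), y_scores))
###Output _____no_output_____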
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain'
# Verify that cluster does not exist already
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')
    aks_target = ComputeTarget.create(workspace=ws,
                                      name=aks_name,
                                      provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Deploy web service to AKS
###Code
# Set the web service configuration (using default here)
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration()
aks_service_name ='model-scoring-local-aks'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[scoring_explainer_model, original_model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
###Output
_____no_output_____
###Markdown
View the service logs
###Code
aks_service.get_logs()
###Output
_____no_output_____
###Markdown
Consume the web service using the run method to do the scoring and the scoring explanation.We test the web service by passing data. The run() method retrieves the API keys behind the scenes to make sure that the call is authenticated.
###Code
# Serialize the first row of the test data into json
X_test_json = X_test_df[:1].to_json(orient='records')
print(X_test_json)

# Call the service to get the predictions and the engineered and raw explanations
output = aks_service.run(X_test_json)

# Print the predicted value
print('predictions:\n{}\n'.format(output['predictions']))

# Print the engineered feature importances for the predicted value
print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values']))

# Print the raw feature importances for the predicted value
print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values']))
###Output
_____no_output_____
###Markdown
Clean upDelete the service.
###Code
aks_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.39.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', None) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
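# Note: the try/except below first looks this name up in the workspace and reuses
# the existing AKS compute target if it is found; a new cluster (with the default
# AksCompute node count) is only provisioned when that lookup fails.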
aks_name = 'scoring-explain'
# Verify that cluster does not exist already
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')
    aks_target = ComputeTarget.create(workspace=ws,
                                      name=aks_name,
                                      provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Deploy web service to AKS
###Code
# Set the web service configuration (using default here)
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration()
aks_service_name ='model-scoring-local-aks'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[scoring_explainer_model, original_model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
###Output
_____no_output_____
###Markdown
View the service logs
###Code
aks_service.get_logs()
###Output
_____no_output_____
###Markdown
Consume the web service using the run method to do the scoring and the scoring explanation.We test the web service by passing data. The run() method retrieves the API keys behind the scenes to make sure that the call is authenticated.
###Code
# Serialize the first row of the test data into json
X_test_json = X_test_df[:1].to_json(orient='records')
print(X_test_json)

# Call the service to get the predictions and the engineered and raw explanations
output = aks_service.run(X_test_json)

# Print the predicted value
print('predictions:\n{}\n'.format(output['predictions']))

# Print the engineered feature importances for the predicted value
print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values']))

# Print the raw feature importances for the predicted value
print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values']))
###Output
_____no_output_____
###Markdown
Clean upDelete the service.
###Code
aks_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Visualize the model's feature importance in azure portal6. 
Explore any model's explanation and explore feature importance in azure portal7. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.6.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Best Model 's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
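If you are only interested in the most important features, the same download can also be limited. The next cell downloads the full engineered explanation; the sketch below is a minimal variation that assumes the optional top_k argument is available in your version of the explanation client.
###Code
# Minimal sketch: download only the top 4 engineered features
# (assumes download_model_explanation supports the optional top_k argument in this SDK version)
client = ExplanationClient.from_run(best_run)
top_engineered_explanations = client.download_model_explanation(raw=False, top_k=4)
print(top_engineered_explanations.get_feature_importance_dict())
###Output
_____no_output_____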
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown ExplanationsIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.17.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. 
Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. 
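For example, the following is a minimal sketch of inspecting those properties; it assumes the fitted model exposes the usual scikit-learn Pipeline interface (a list of named steps plus get_params), which AutoML fitted models generally do.
###Code
# Minimal sketch (assumes the fitted model behaves like a scikit-learn Pipeline):
# list the featurization and learner steps, then dump the top-level parameters.
for step_name, step in fitted_model.steps:
    print(step_name, type(step).__name__)

print(fitted_model.get_params(deep=False))
###Output
_____no_output_____
###Markdown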
TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
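# Note: the try/except below first looks this name up in the workspace and reuses
# the existing AKS compute target if it is found; a new cluster (with the default
# AksCompute node count) is only provisioned when that lookup fails.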
aks_name = 'scoring-explain'
# Verify that cluster does not exist already
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')
    aks_target = ComputeTarget.create(workspace=ws,
                                      name=aks_name,
                                      provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Deploy web service to AKS
###Code
# Set the web service configuration (using default here)
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

aks_config = AksWebservice.deploy_configuration()
aks_service_name ='model-scoring-local-aks'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[scoring_explainer_model, original_model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
###Output
_____no_output_____
###Markdown
View the service logs
###Code
aks_service.get_logs()
###Output
_____no_output_____
###Markdown
Consume the web service using the run method to do the scoring and the scoring explanation.We test the web service by passing data. The run() method retrieves the API keys behind the scenes to make sure that the call is authenticated.
###Code
# Serialize the first row of the test data into json
X_test_json = X_test_df[:1].to_json(orient='records')
print(X_test_json)

# Call the service to get the predictions and the engineered explanations
output = aks_service.run(X_test_json)

# Print the predicted value
print('predictions:\n{}\n'.format(output['predictions']))

# Print the engineered feature importances for the predicted value
print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values']))
###Output
_____no_output_____
###Markdown
Clean upDelete the service.
###Code
aks_service.delete()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. 
Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.22.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
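If you want a quick local look at any of these importance dictionaries instead of (or in addition to) the portal dashboard, a small helper like the one below can be used. This is an illustrative sketch only; `plot_top_features` is a hypothetical helper, not part of the AutoML SDK: ###Code
import matplotlib.pyplot as plt

def plot_top_features(importance_dict, n=10):
    """Bar chart of the n most important features from a name -> importance dict (sketch)."""
    top = sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]
    names, values = zip(*top)
    plt.barh(range(len(values)), values)
    plt.yticks(range(len(values)), names)
    plt.gca().invert_yaxis()  # most important feature on top
    plt.xlabel('Importance')
    plt.title('Top {} features'.format(n))
    plt.show()

# Example usage once an explanation has been computed:
# plot_top_features(engineered_explanations.get_feature_importance_dict())
###Output _____no_output_____ ###Markdown The cell below computes the raw-feature explanations exactly as in the original notebook.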
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
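If you prefer to author the service environment by hand instead of reusing the AutoML run's environment, a rough sketch looks like the following. The package list is an illustrative assumption, not the exact set AutoML requires, which is why the notebook downloads the run's own conda specification instead: ###Code
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment

# Sketch: hand-built environment - the packages listed here are illustrative assumptions
hand_built_env = Environment(name='manual-scoring-env')
conda_dep = CondaDependencies.create(
    pip_packages=['azureml-defaults', 'azureml-train-automl-runtime', 'azureml-interpret'])
hand_built_env.python.conda_dependencies = conda_dep
###Output _____no_output_____ ###Markdown In this notebook, however, we reuse the environment recorded by the AutoML run itself: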
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.28.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
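On a dataset this imbalanced, overall accuracy can look deceptively high, so per-class metrics are worth checking as well. Below is a short optional sketch (not part of the original notebook, and assuming the fitted pipeline follows the scikit-learn interface); the cell after it then reproduces the original prediction step: ###Code
from sklearn.metrics import classification_report

# Sketch: per-class precision/recall/F1 on the validation split
X_val = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()
y_val = validation_data.keep_columns(columns=[label_column_name]).to_pandas_dataframe()
print(classification_report(y_val.values.ravel(), fitted_model.predict(X_val)))
###Output _____no_output_____ ###Markdown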
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
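As a side note, the dictionaries returned by `get_feature_importance_dict()` are plain Python dicts, so they are easy to turn into a small local report. A minimal sketch (not part of the original notebook): ###Code
import pandas as pd

# Sketch: tabulate a feature-importance dictionary and keep the ten largest values
def importance_table(importance_dict, n=10):
    return (pd.Series(importance_dict, name='importance')
              .sort_values(ascending=False)
              .head(n)
              .to_frame())

# Example usage once an explanation has been computed:
# importance_table(engineered_explanations.get_feature_importance_dict())
###Output _____no_output_____ ###Markdown The next cell computes the raw explanations as in the original notebook.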
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
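Before moving on, an optional sanity check (a sketch that uses the `scoring_explainer.pkl` file pickled a few cells above) is to reload the explainer locally and confirm it deserializes cleanly: ###Code
import joblib

# Sketch: reload the scoring explainer pickled earlier in this section
with open('scoring_explainer.pkl', 'rb') as stream:
    reloaded_explainer = joblib.load(stream)
print(type(reloaded_explainer))
###Output _____no_output_____ ###Markdown The next cell downloads the conda environment recorded by the AutoML run.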
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. 
Explore the results.5. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import os import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. 
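If you would rather not block the notebook while the experiment trains, an alternative pattern is to submit without streaming output and wait explicitly later. It is shown commented out below, as a sketch using the standard `Experiment`/`Run` API, so that it is not run in addition to the original cell: ###Code
# Sketch: non-blocking submission, then an explicit wait (alternative to the cell below)
# local_run = experiment.submit(automl_config, show_output=False)
# local_run.wait_for_completion(show_output=True)   # stream logs when you are ready
###Output _____no_output_____ ###Markdown The original blocking submission follows.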
###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. 
[Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Visualization model's feature importance in azure portal6. Explore any model's explanation and explore feature importance in azure portal7. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Best Model 's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
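The explanation artifacts are stored alongside the run's other outputs, so a quick way to see what is available before downloading is to list the run's files. A small sketch using the standard `Run` API: ###Code
# Sketch: list the artifacts stored with the best run (standard azureml Run API)
for file_name in best_run.get_file_names():
    print(file_name)
###Output _____no_output_____ ###Markdown The next cell downloads the engineered-feature explanation itself.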
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown ExplanationsIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
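To make the mimic/surrogate idea more concrete before running the `explain()` call below, here is a tiny self-contained sketch in plain scikit-learn (independent of Azure ML): an interpretable model is trained on the *predictions* of a black-box model, and its importances are then used as a proxy explanation: ###Code
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Sketch: train an interpretable surrogate on the black-box model's predictions
X_demo, y_demo = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_demo, y_demo)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_demo, black_box.predict(X_demo))
print(dict(zip(['f{}'.format(i) for i in range(X_demo.shape[1])],
               surrogate.feature_importances_.round(3))))
###Output _____no_output_____ ###Markdown The engineered-feature explanation for the actual AutoML model is computed in the next cell.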
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. 
SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.8.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. 
Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. 
We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
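If you also want a quick local view rather than the portal dashboard, a small sketch like the one below charts the strongest engineered features. It assumes the `engineered_explanations` object (either the one downloaded earlier with `ExplanationClient` or the one produced by the next cell); both expose `get_feature_importance_dict()`. ###Code
import matplotlib.pyplot as plt

# Pull the global importance values as a plain dict of feature name -> importance.
importance = engineered_explanations.get_feature_importance_dict()

# Keep the ten features with the largest absolute importance.
top_features = sorted(importance.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
names, values = zip(*top_features)

plt.barh(range(len(names)), values)
plt.yticks(range(len(names)), names)
plt.gca().invert_yaxis()
plt.xlabel('Importance')
plt.title('Top 10 engineered features')
plt.show()
###Output _____no_output_____ ###Markdown The next cell computes the engineered explanations for the transformed test samples.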
###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.explain.model package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to create the conda dependencies comprising of the azureml-explain-model, azureml-train-automl and azureml-defaults packages. 
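As a point of reference only, a roughly equivalent environment could also be assembled by hand with `CondaDependencies.create`; in practice the file downloaded from the run (next cell) is preferred because it pins the exact package versions the AutoML run trained with. ###Code
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Hand-built alternative to downloading the environment file from the run.
conda_deps = CondaDependencies.create(
    pip_packages=['azureml-explain-model', 'azureml-train-automl', 'azureml-defaults'])

manual_env = Environment(name='myenv-manual')
manual_env.python.conda_dependencies = conda_deps
###Output _____no_output_____ ###Markdown The cell below takes the recommended route and downloads the environment definition directly from the run.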
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.explain.model from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
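# A couple of optional knobs (check the AksCompute docs for your SDK version):
# compute target names must be short (letters, digits and dashes only), and
# provisioning_configuration also accepts settings such as agent_count when the
# defaults do not fit, for example:
#   prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2',
#                                                       agent_count=3)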
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. 
Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.31.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
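Because fraudulent transactions are rare, the raw counts in a confusion matrix can be hard to judge on their own. Once the next cell has produced `y_test_df` and `y_pred`, a short sketch like this reports per-class precision, recall and F1 using scikit-learn, which is already used elsewhere in this notebook. ###Code
from sklearn.metrics import classification_report

# Assumes the y_test_df and y_pred created by the cell that follows.
print(classification_report(y_test_df.values.ravel(), y_pred, digits=4))
###Output _____no_output_____ ###Markdown The cell below splits the data locally and scores it with the fitted model.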
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
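For a quick tabular look at the raw features without leaving the notebook, the importance dictionary can be dropped into a pandas Series and sorted. This assumes the `raw_explanations` object (either the one downloaded earlier with `ExplanationClient` or the one produced by the next cell). ###Code
import pandas as pd

# Global importance of the original (raw) input columns, largest first.
raw_importance = pd.Series(raw_explanations.get_feature_importance_dict())
print(raw_importance.sort_values(ascending=False).head(10))
###Output _____no_output_____ ###Markdown The cell below computes those raw explanations from the transformed test samples.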
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
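If you want to confirm exactly which packages the scoring service will receive, the downloaded specification can simply be printed after the next cell has run, and the resulting environment can optionally be registered for reuse in later deployments; a minimal sketch: ###Code
# Assumes the next cell has already downloaded 'myenv.yml' and built `myenv`.
with open('myenv.yml') as f:
    print(f.read())

# myenv.register(workspace=ws)  # uncomment to keep the environment in the workspace
###Output _____no_output_____ ###Markdown The cell below downloads that environment definition from the run.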
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
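# If a suitable AKS cluster already exists in your subscription, it can usually be
# attached instead of provisioned (names below are placeholders; check
# AksCompute.attach_configuration for your SDK version):
#   attach_config = AksCompute.attach_configuration(resource_group='<existing-rg>',
#                                                   cluster_name='<existing-aks>')
#   aks_target = ComputeTarget.attach(ws, aks_name, attach_config)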
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.26.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
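For fraud detection it is often useful to work with scores rather than hard labels. Most AutoML classification pipelines expose a scikit-learn style `predict_proba`, so, assuming the `X_test_df` built in the next cell, a sketch like the following applies a stricter-than-default probability threshold: ###Code
# Assumes fitted_model and the X_test_df produced by the next cell.
# For the usual 0/1 encoding, column 1 holds the probability of the 'fraud' class
# (the model's classes_ attribute shows the column ordering if you are unsure).
fraud_scores = fitted_model.predict_proba(X_test_df)[:, 1]

# Flag anything above a custom threshold for manual review.
threshold = 0.3
flagged = (fraud_scores >= threshold).astype(int)
print('Transactions flagged at threshold {}: {}'.format(threshold, flagged.sum()))
###Output _____no_output_____ ###Markdown The next cell splits the data locally and runs the standard `predict` call.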
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
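# For experimentation, a smaller dev/test cluster can typically be requested by
# passing cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST to
# provisioning_configuration, which relaxes the production sizing requirements;
# see the AksCompute documentation for your SDK version before relying on this.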
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.24.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
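Fraudulent transactions are a small minority of this dataset, so overall accuracy can look deceptively good. Once the next cell has produced `y_pred` and `y_test_df`, a per-class precision/recall summary is often more informative; below is a minimal sketch using scikit-learn, which this notebook already relies on for the confusion matrix.
###Code
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the imbalanced fraud data.
# Note: run this after the prediction cell below has defined y_test_df and y_pred.
print(classification_report(y_test_df.values.ravel(), y_pred, digits=4))
###Output
_____no_output_____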
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. 
Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. 
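The dictionaries returned by `get_feature_importance_dict()` can be long for a dataset this wide, so a small helper keeps the printout readable. The function below is a hypothetical convenience, not part of the SDK; it can be applied to either the engineered or the raw explanations once the next cell has computed them.
###Code
# Hypothetical helper: print only the k most important features from an importance dictionary
def print_top_features(importance_dict, k=10):
    ranked = sorted(importance_dict.items(), key=lambda kv: abs(kv[1]), reverse=True)
    for name, value in ranked[:k]:
        print('{}: {:.4f}'.format(name, value))

# Example usage once the raw explanations below have been computed:
# print_top_features(raw_explanations.get_feature_importance_dict())
###Output
_____no_output_____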
###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. ###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. 
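If your entry script ever needs a package that the AutoML run did not record, the downloaded environment can be extended before deployment. This is optional; the sketch below assumes the standard `CondaDependencies` API exposed through `Environment.python`, uses a placeholder package name, and should only be run after the next cell has created `myenv`.
###Code
# Optional: add an extra pip dependency to the environment created in the next cell.
# 'packaging' is only a placeholder; use whatever your score.py actually imports.
myenv.python.conda_dependencies.add_pip_package('packaging')
###Output
_____no_output_____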
###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.9.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. 
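Because the primary metric for this run (`average_precision_score_weighted`) is computed from scores rather than hard labels, it can also be useful to look at predicted probabilities on the test split. Below is a minimal sketch, assuming the fitted AutoML pipeline exposes `predict_proba` (classification pipelines generally do) and that the next cell has defined `X_test_df` and `y_test_df`.
###Code
from sklearn.metrics import roc_auc_score, average_precision_score

# Score of the positive (fraud) class; assumes predict_proba is available on the fitted pipeline.
# Note: run this after the next cell has defined X_test_df and y_test_df.
y_scores = fitted_model.predict_proba(X_test_df)[:, 1]
print('ROC AUC: {:.4f}'.format(roc_auc_score(y_test_df.values.ravel(), y_scores)))
print('Average precision: {:.4f}'.format(average_precision_score(y_test_df.values.ravel(), y_scores)))
###Output
_____no_output_____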
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
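After the next cell has built `automl_explainer_setup_obj`, a quick look at a couple of its attributes can confirm that the setup worked; the attribute names used here (`engineered_feature_names` and `classes`) are the same ones this notebook passes to the MimicWrapper later, so this is just a convenience check.
###Code
# Inspect the explainer setup object produced by the next cell
print('Number of engineered features:', len(automl_explainer_setup_obj.engineered_feature_names))
print('Classes found in the label column:', automl_explainer_setup_obj.classes)
###Output
_____no_output_____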
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.explain.model package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
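Once the registration in the next cell has completed, you can confirm that both models are visible to the workspace (and see which versions exist) with the standard `Model.list` call; a small sketch:
###Code
from azureml.core.model import Model

# List registered versions of the two models created in the next cell
for registered in Model.list(ws, name='automl_model') + Model.list(ws, name='scoring_explainer'):
    print(registered.name, 'version', registered.version)
###Output
_____no_output_____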
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to create the conda dependencies comprising of the azureml-explain-model, azureml-train-automl and azureml-defaults packages. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.explain.model from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Test)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Visualization model's feature importance in azure portal6. 
Explore any model's explanation and explore feature importance in azure portal7. Test the fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.explain.model._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.3.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. Best Model 's explanationRetrieve the explanation from the best_run which includes explanations for engineered features and raw features. Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
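Besides the portal dashboard, a quick local bar chart of the most important engineered features is often handy inside the notebook. Below is a minimal sketch that reuses the pandas and matplotlib imports from the setup cell, to be run after the next cell has produced `engineered_explanations`.
###Code
# Bar chart of the top engineered features, once engineered_explanations exists
importances = pd.Series(engineered_explanations.get_feature_importance_dict()).sort_values()
importances.tail(10).plot.barh()
plt.xlabel('Importance')
plt.title('Top engineered features')
plt.show()
###Output
_____no_output_____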
###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown ExplanationsIn this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. 
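Computing local explanations for every test row can take a while on larger datasets. If you only need a quick look, you can explain a slice of the transformed test data instead; this sketch assumes `X_test_transform` supports row slicing (both NumPy arrays and SciPy sparse matrices do).
###Code
# Explain only the first 100 transformed test rows to keep runtime down
sample_explanations = explainer.explain(['local', 'global'],
                                        eval_dataset=automl_explainer_setup_obj.X_test_transform[:100])
print(sample_explanations.get_feature_importance_dict())
###Output
_____no_output_____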
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Test the fitted modelNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. 
SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.36.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'AUC_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. 
Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values. ###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. 
The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download the engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Download the raw feature importance from artifact storeYou can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = client.download_model_explanation(raw=True) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. ###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification', automl_run=automl_run) ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. 
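As an aside before initializing the wrapper: the explanation dictionaries downloaded above are plain Python dicts, so they are easy to persist for later reporting. A small sketch (the output file name is arbitrary):
###Code
import json

# Save the raw feature importances downloaded earlier to a local JSON file
with open('raw_feature_importance.json', 'w') as f:
    json.dump(raw_explanations.get_feature_importance_dict(), f, indent=2)
###Output
_____no_output_____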
###Code from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features. ###Code # Compute the raw explanations raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform, raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) print(raw_explanations.get_feature_importance_dict()) print("You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import joblib import pandas as pd from azureml.core.model import Model from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # Retrieve model explanations for raw explanations raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values, 'raw_local_importance_values': raw_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) # Print the raw feature importances for the predicted value print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png) Automated Machine Learning_**Classification of credit card fraudulent transactions with local run **_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Train](Train)1. [Results](Results)1. [Test](Tests)1. [Explanation](Explanation)1. [Acknowledgements](Acknowledgements) IntroductionIn this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.This notebook is using the local machine compute to train the model.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an experiment using an existing workspace.2. Configure AutoML using `AutoMLConfig`.3. Train the model.4. Explore the results.5. Test the fitted model.6. 
Explore any model's explanation and explore feature importance in azure portal.7. Create an AKS cluster, deploy the webservice of AutoML scoring model and the explainer model to the AKS and consume the web service. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import pandas as pd import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.core.dataset import Dataset from azureml.train.automl import AutoMLConfig from azureml.interpret._internal.explanation_client import ExplanationClient ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.12.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ws = Workspace.from_config() # choose a name for experiment experiment_name = 'automl-classification-ccard-local' experiment=Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Load DataLoad the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. ###Code data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv" dataset = Dataset.Tabular.from_delimited_files(data) training_data, validation_data = dataset.random_split(percentage=0.8, seed=223) label_column_name = 'Class' ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted||**enable_early_stopping**|Stop the run if the metric score is not showing improvement.||**n_cross_validations**|Number of cross validation splits.||**training_data**|Input dataset, containing both features and label column.||**label_column_name**|The name of the label column.|**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainprimary-metric) ###Code automl_settings = { "n_cross_validations": 3, "primary_metric": 'average_precision_score_weighted', "experiment_timeout_hours": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible "verbosity": logging.INFO, "enable_stack_ensemble": False } automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', training_data = training_data, label_column_name = label_column_name, **automl_settings ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) # If you need to retrieve a run that already started, use the following code #from azureml.train.automl.run import AutoMLRun #local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>') local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Analyze results Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() fitted_model ###Output _____no_output_____ ###Markdown Print the properties of the modelThe fitted_model is a python object and you can read the different properties of the object. TestsNow that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values.
###Code # convert the test data to dataframe X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe() y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe() # call the predict functions on the model y_pred = fitted_model.predict(X_test_df) y_pred ###Output _____no_output_____ ###Markdown Calculate metrics for the predictionNow visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values from the trained model that was returned. ###Code from sklearn.metrics import confusion_matrix import numpy as np import itertools cf =confusion_matrix(y_test_df.values,y_pred) plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest') plt.colorbar() plt.title('Confusion Matrix') plt.xlabel('Predicted') plt.ylabel('Actual') class_labels = ['False','True'] tick_marks = np.arange(len(class_labels)) plt.xticks(tick_marks,class_labels) plt.yticks([-0.5,0,1,1.5],['','False','True','']) # plotting text value inside cells thresh = cf.max() / 2. for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])): plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black') plt.show() ###Output _____no_output_____ ###Markdown ExplanationIn this section, we will show how to compute model explanations and visualize the explanations using azureml-interpret package. We will also show how to run the automl model and the explainer model through deploying an AKS web service.Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data. Run the explanation Download engineered feature importance from artifact storeYou can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code client = ExplanationClient.from_run(best_run) engineered_explanations = client.download_model_explanation(raw=False) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + best_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Retrieve any other AutoML model from training ###Code automl_run, fitted_model = local_run.get_output(metric='accuracy') ###Output _____no_output_____ ###Markdown Setup the model explanations for AutoML modelsThe fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-1. Featurized data from train samples/test samples2. Gather engineered name lists3. Find the classes in your labeled column in classification scenariosThe automl_explainer_setup_obj contains all the structures from above list. 
###Code X_train = training_data.drop_columns(columns=[label_column_name]) y_train = training_data.keep_columns(columns=[label_column_name], validate=True) X_test = validation_data.drop_columns(columns=[label_column_name]) from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded. ###Code from interpret.ext.glassbox import LGBMExplainableModel from azureml.interpret.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, explainable_model=automl_explainer_setup_obj.surrogate_model, init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map], classes=automl_explainer_setup_obj.classes, explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features. ###Code # Compute the engineered explanations engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) print("You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\n" + automl_run.get_portal_url()) ###Output _____no_output_____ ###Markdown Initialize the scoring Explainer, save and upload it for later use in scoring explanation ###Code from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer import joblib # Initialize the ScoringExplainer scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) # Pickle scoring explainer locally to './scoring_explainer.pkl' scoring_explainer_file_name = 'scoring_explainer.pkl' with open(scoring_explainer_file_name, 'wb') as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name) ###Output _____no_output_____ ###Markdown Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service. 
###Code # Register trained automl model present in the 'outputs' folder in the artifacts original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl') scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='outputs/scoring_explainer.pkl') ###Output _____no_output_____ ###Markdown Create the conda dependencies for setting up the serviceWe need to download the conda dependencies using the automl_run object. ###Code from azureml.automl.core.shared import constants from azureml.core.environment import Environment automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml') myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml") myenv ###Output _____no_output_____ ###Markdown Write the Entry ScriptWrite the script that will be used to predict on your model ###Code %%writefile score.py import numpy as np import pandas as pd import os import pickle import azureml.train.automl import azureml.interpret from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, \ automl_setup_model_explanations import joblib from azureml.core.model import Model def init(): global automl_model global scoring_explainer # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model automl_model_path = Model.get_model_path('automl_model') scoring_explainer_path = Model.get_model_path('scoring_explainer') automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) def run(raw_data): data = pd.read_json(raw_data, orient='records') # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, X_test=data, task='classification') # Retrieve model explanations for engineered explanations engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) # You can return any data type as long as it is JSON-serializable return {'predictions': predictions.tolist(), 'engineered_local_importance_values': engineered_local_importance_values} ###Output _____no_output_____ ###Markdown Create the InferenceConfig Create the inference config that will be used when deploying the model ###Code from azureml.core.model import InferenceConfig inf_config = InferenceConfig(entry_script='score.py', environment=myenv) ###Output _____no_output_____ ###Markdown Provision the AKS ClusterThis is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it. ###Code from azureml.core.compute import ComputeTarget, AksCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
aks_name = 'scoring-explain' # Verify that cluster does not exist already try: aks_target = ComputeTarget(workspace=ws, name=aks_name) print('Found existing cluster, use it.') except ComputeTargetException: prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2') aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config) aks_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Deploy web service to AKS ###Code # Set the web service configuration (using default here) from azureml.core.webservice import AksWebservice from azureml.core.model import Model aks_config = AksWebservice.deploy_configuration() aks_service_name ='model-scoring-local-aks' aks_service = Model.deploy(workspace=ws, name=aks_service_name, models=[scoring_explainer_model, original_model], inference_config=inf_config, deployment_config=aks_config, deployment_target=aks_target) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown View the service logs ###Code aks_service.get_logs() ###Output _____no_output_____ ###Markdown Consume the web service using run method to do the scoring and explanation of scoring.We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated. ###Code # Serialize the first row of the test data into json X_test_json = X_test_df[:1].to_json(orient='records') print(X_test_json) # Call the service to get the predictions and the engineered and raw explanations output = aks_service.run(X_test_json) # Print the predicted value print('predictions:\n{}\n'.format(output['predictions'])) # Print the engineered feature importances for the predicted value print('engineered_local_importance_values:\n{}\n'.format(output['engineered_local_importance_values'])) ###Output _____no_output_____ ###Markdown Clean upDelete the service. ###Code aks_service.delete() ###Output _____no_output_____
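###Markdown The cell below is an optional, hedged sketch that is not part of the original notebook: before running the clean-up cell above, the same deployed AKS endpoint could also be scored over plain REST, which makes explicit the key-based authentication that `run()` handles behind the scenes. It assumes the default key-based authentication and reuses `aks_service` and `X_test_json` from the earlier cells. ###Code
import requests

# Scoring endpoint and authentication keys exposed by the deployed service
scoring_uri = aks_service.scoring_uri
primary_key, secondary_key = aks_service.get_keys()

# Post the same serialized row that was passed to aks_service.run()
headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer ' + primary_key}
resp = requests.post(scoring_uri, data=X_test_json, headers=headers)
print(resp.json())
###Output _____no_output_____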
code_examples/keras_introduction/Single_layer_by_hand.ipynb
###Markdown Building and training a single layer neural network by hand ###Code # Import packages import pandas as pd import numpy as np from ipywidgets import interact import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Defining the modelAs mentioned previously, our model is $ p_{setosa} = f( w_0 + w_1 \times width + w_2 \times length ) \qquad with \;\; f(x) = 1/(1 + e^{-x})$Let us define this using Python: ###Code def probability_setosa( petal_length, petal_width, w0, w1, w2 ): "Return the probability that a given specimen belongs to the species setosa" # Compute sum of features times weights x = w0 + w1*petal_width + w2*petal_length # Apply non-linear function: sigmoid p = 1./( 1. + np.exp( -x ) ) return( p ) ###Output _____no_output_____ ###Markdown Training the network: finding the right weights so that the model fits the training dataIn order to get a sense of what training the network implies, we will try to find the right weights *by hand*. Once, we use Keras, this process will be automated.Let us first load the data from the training set. ###Code df = pd.read_csv('./data/setosa/train.csv') df.head(10) ###Output _____no_output_____ ###Markdown We then define a function that plots the prediction of the model **for a given set of weights**, along with the training data. ###Code def plot_model( w0, w1, w2 ): "Plot the model, along with the training data." # Calculate the probability on a mesh petal_width_mesh, petal_length_mesh = \ np.meshgrid( np.linspace(0,3,100), np.linspace(0,8,100) ) p = probability_setosa( petal_width_mesh, petal_length_mesh, w0, w1, w2 ) # Plot the probability on the mesh plt.clf() plt.imshow( p.T, extent=[0,3,0,8], origin='lower', vmin=0, vmax=1, cmap='RdBu', aspect='auto', alpha=0.5 ) # Plot the data points plt.scatter( df['petal width (cm)'], df['petal length (cm)'], c=df['setosa'], cmap='RdBu') plt.xlabel('petal width (cm)') plt.ylabel('petal length (cm)') cb = plt.colorbar() cb.set_label('setosa') ###Output _____no_output_____ ###Markdown We can then use the function `interact` of `ipywidgets` to call this function with adjustable weights: ###Code interact( plot_model, w0=(-4.,5.), w1=(-2.,2.), w2=(-2., 3.)) # Optimal weights: fill these values w0 = w1 = w2 = ###Output _____no_output_____ ###Markdown Performing predictions on the test setsNow that we trained the model by finding the optimal weights for the training dataset, let us perform predictions on the test dataset.Let us first load the test set. ###Code df_test = pd.read_csv('./data/setosa/test.csv') df_test.head(10) ###Output _____no_output_____ ###Markdown We can now check the accuracy of our model on the first point for instance: ###Code probability_setosa( 4.2, 1.5, w0, w1, w2 ) ###Output _____no_output_____ ###Markdown More generally, by using pandas syntax, we can perform predictions on the whole dataset: ###Code df_test['probability_setosa_predicted'] = \ probability_setosa( df_test['petal length (cm)'], df_test['petal width (cm)'], w0, w1, w2 ) df_test ###Output _____no_output_____
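###Markdown To close the loop, the predicted probabilities can be turned into hard class labels and compared with the true labels. The cell below is a small sketch of that check; the 0.5 decision threshold and the new `setosa_predicted` column are choices made here, and it assumes `test.csv` carries the same `setosa` label column as the training set. ###Code
# Threshold the predicted probabilities at 0.5 to obtain hard class labels
df_test['setosa_predicted'] = (df_test['probability_setosa_predicted'] > 0.5).astype(int)

# Fraction of test specimens classified correctly (assumes a 'setosa' column in the test set)
accuracy = (df_test['setosa_predicted'] == df_test['setosa']).mean()
print('Test accuracy: {:.2f}'.format(accuracy))
###Output _____no_output_____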
notebooks/SensiML_TF_Lite.ipynb
###Markdown ###Code !pip install sensiml-dev -U import pandas as pd from sensiml import SensiML dsk = SensiML() dsk.project = 'Wakeword' dsk.pipeline = 'Tensorflow Lite Micro' pd.set_option("display.max_rows", 150) dsk.list_functions(qgrid=False).head(100) dsk.pipeline.add_feature_generator? dsk.list_queries() dsk.pipeline.reset() dsk.pipeline.set_input_query("Q1") dsk.pipeline.describe() dsk.snippets.Segmenter dsk.pipeline.features_to_tensor? ###Output _____no_output_____
notebooks/estudos_python/estudo_pandas3.ipynb
###Markdown **Aggregation and grouping with PANDAS** ###Code
# importing the csv with the results of the 2016 US primaries
import pandas as pd
import numpy as np
presults=pd.read_csv('res/primary-results.csv')
print(presults.columns)
print(presults['votes'].mean())
# grouping by the candidate column and using aggregators to extract the maximum, the minimum and the mean of the votes
presults.groupby('candidate').aggregate({'votes':{min,np.mean,max}})
# doing the same for the fraction of the votes
presults.groupby('candidate').aggregate({'fraction_votes':{min,np.mean,max}})
# checking the districts where Hillary Clinton received 100% of the votes
presults[(presults['fraction_votes']==1) & (presults['candidate']=='Hillary Clinton')]
# creating a filter function to apply to the grouped dataframe: it keeps only the candidates with more than 5,500,000 total votes
def minimum_votes_filter(x):
    #print(x['candidate'])
    #print("----> ",x['votes'].sum())
    #print(x['votes'])
    return x['votes'].sum() > 5500000
# applying the function with filter and grouping the data by the sum of votes of each candidate
pd.DataFrame(presults.groupby('candidate').filter(minimum_votes_filter)).groupby('candidate').aggregate({'votes':{sum}})
#presults.groupby('candidate')['votes'].sum()
# grouping by more than one column to get the vote count of each candidate in each state
presults.groupby(['state','candidate'])['votes'].sum()
###Output _____no_output_____
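###Markdown Note that the nested-dict aggregation syntax used above (`aggregate({'votes':{min,np.mean,max}})`) is deprecated in newer pandas releases. The cell below is a small sketch, added here for reference, of an equivalent non-deprecated aggregation plus a pivot of the state/candidate totals into a wide table; it only reuses the `presults` dataframe defined above. ###Code
# Same min / mean / max summary of votes per candidate, using the list-of-functions syntax
presults.groupby('candidate')['votes'].agg(['min', 'mean', 'max'])

# Pivot the per-state vote totals so that each candidate becomes a column
presults.groupby(['state', 'candidate'])['votes'].sum().unstack(fill_value=0)
###Output _____no_output_____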
Ludwig_CW3/.ipynb_checkpoints/Ludwig_skin-checkpoint.ipynb
###Markdown Unfortunately, my computer was not powerful enough for everything to work; I hope you manage to run it (the code that follows is not particularly complicated), so everything should work as long as the training completes. UPD: the training did not finish even on Kaggle.com through their notebooks (nothing completed after 9 hours), so I am training the model without the images ###Code
# imports needed by this cell (module paths assumed for the Ludwig version used here)
import pandas as pd
from ludwig.api import LudwigModel
from ludwig.visualize import learning_curves

# Load the main dataset from the metadata file
data_noim = pd.DataFrame(pd.read_csv('HAM10000_metadata.csv'))
data_noim.head()
# create the model
model_noim = LudwigModel(model_definition_file='model_definition_noim.yaml')
# training
train_noim = model_noim.train(data_noim)
# Make predictions
predictions = model_noim.predict(data_noim)
predictions.head()
# nice plots of the model's training
learning_curves(train_noim, 'localization')
###Output _____no_output_____
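###Markdown The file `model_definition_noim.yaml` is not included in this notebook, so the cell below is only a hypothetical illustration of what an equivalent definition might look like when passed to `LudwigModel` as a Python dictionary. The feature names follow the standard HAM10000 metadata columns, and the choice of input and output features is an assumption, not the author's actual configuration. ###Code
# Hypothetical model definition (the real YAML is not shown); column names are taken from
# the standard HAM10000 metadata file and the target choice is an assumption.
model_definition = {
    'input_features': [
        {'name': 'age', 'type': 'numerical'},
        {'name': 'sex', 'type': 'category'},
        {'name': 'localization', 'type': 'category'},
    ],
    'output_features': [
        {'name': 'dx', 'type': 'category'},
    ],
}
model_noim = LudwigModel(model_definition=model_definition)
###Output _____no_output_____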
12_Training_Probabilistic_Graphical_Models_hw.ipynb
###Markdown Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers. ###Code %run -i "assignment_helper.py" ###Output Available frameworks: Forest SDK Qiskit D-Wave Ocean ###Markdown Probabilistic graphical modelsRecall that probabilistic graphical models capture a compact representation of a joint probability distribution through conditionally independence: random variable $X$ is conditionally independent of $Y$ given $Z$ $(X\perp Y|Z)$, if $P(X=x, Y=y|Z=z) = P(X=x|Z=z)P(Y=y|Z=z)$ for all $x\in X,y\in Y,z\in Z$. A Markov network is a type of probabilistic graphical models that allows cycles in the graph and uses global normalization of probabilities (i.e. a partition function). The factorization of the joint probability distribution is given as a sum $P(X_1, \ldots, X_N) = \frac{1}{Z}\exp(-\sum_k E[C_k])$, where $C_k$ are are cliques of the graph, and $E[.]$ is an energy defined over the cliques.**Exercise 1** (2 points). Define a Markov random field of four binary random variables in `dimod`. Random variables $X_1$ and $X_3$ are conditionally independent given $X_2$. The random variable $X_4$ is independent of all the other variables. The coupling strength on all edges in the graph is -1. Apart from the coupling between nodes, we also consider an external field of strength 1 applied to all nodes. Store the resulting `BinaryQuadraticModel` in an object called `model`. ###Code ### ### YOUR CODE HERE ### import dimod n_spins = 4 h = {v: 1 for v in range(n_spins)} J = {(0, 1): -1, (1, 2): -1} model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.BINARY) assert isinstance(model, dimod.binary_quadratic_model.BinaryQuadraticModel) assert model.vartype == dimod.BINARY assert len(model.variables) == 4 assert [i for i in model.linear] == [0, 1, 2, 3] assert [i for i in model.linear.values()] == [1, 1, 1, 1] assert [i for i in model.quadratic] == [(0, 1), (1, 2)] assert [i for i in model.quadratic.values()] == [-1, -1] ###Output _____no_output_____ ###Markdown **Exercise 2** (2 points). Convert the `dimod` model to a `networkx` graph. Store it in an object called `G`. You can use the `add_nodes_from` and `add_edges_from` methods of the graph object and the `linear` and `quadratic` methods of the model object to construct the graph. ###Code #import networkx #G = networkx.Graph() ### ### YOUR CODE HERE ### G = model.to_networkx_graph() assert list(G.nodes) == [0, 1, 2, 3] assert list(G.edges) == [(0,1), (1, 2)] ###Output _____no_output_____ ###Markdown Now you can easily plot the Markov network: ###Code %matplotlib inline networkx.draw(G) ###Output _____no_output_____ ###Markdown **Exercise 3** (1 point). If we want to use quantum annealing to draw samples, we have to address the connectivity structure on the chip. Embed the graph on a single Chimera cell using `minorminer`. ###Code connectivity_structure = dwave_networkx.chimera_graph(1, 1) ### ### YOUR CODE HERE ### from minorminer import find_embedding embedded_graph = find_embedding(G.edges(), connectivity_structure.edges()) assert type(embedded_graph) == dict assert len(embedded_graph) == 3 ###Output _____no_output_____ ###Markdown This is a very simple Markov network that does not need multiple physical qubits to represent a logical qubit. Note that the independent random variable $X_4$ does not appear in the embedding. 
###Code
dwave_networkx.draw_chimera_embedding(connectivity_structure, embedded_graph)
###Output _____no_output_____ ###Markdown **Exercise 4** (2 points). Estimate the partition function of this model at temperature $T=1$ from 100 samples. Store the value in a variable called `Z`. ###Code
###
### YOUR CODE HERE
###
sampler = dimod.SimulatedAnnealingSampler()
temperature = 1
response = sampler.sample(model, beta_range=[1/temperature, 1/temperature], num_reads=100)
degen = {}  # dictionary that associates to each sampled energy E its number of occurrences, used as the degeneracy g[E]
for solution in response.data():  #aggregate().data()
    if solution.energy in degen.keys():
        degen[solution.energy] += 1
    else:
        degen[solution.energy] = 1
print("Degeneracy", degen)
probabilities = np.array([degen[E] * np.exp(-E/temperature) for E in degen.keys()])
Z = probabilities.sum()
###
### AUTOGRADER TEST - DO NOT REMOVE
###
###Output _____no_output_____
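###Markdown Because the model has only four binary variables, the partition function can also be computed exactly by enumerating all $2^4$ configurations. The cell below is an optional sketch of that brute-force reference using `dimod.ExactSolver`; it reuses the `model` and `temperature` defined above and is not required by the autograder. ###Code
# Enumerate every configuration of the binary quadratic model and sum the Boltzmann weights
exact_response = dimod.ExactSolver().sample(model)
Z_exact = sum(np.exp(-datum.energy / temperature) for datum in exact_response.data())
print("Exact partition function:", Z_exact)
###Output _____no_output_____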
text_mining/Language_Classification.ipynb
###Markdown Read German and English articles into different variables ###Code count = 1 german_strs = [] english_strs = [] with open("english_german_articles.txt","r",encoding="utf-8") as f: for line in f.readlines(): if(count <= 90): german_strs.append(line) else: english_strs.append(line) count = count + 1 print(len(german_strs)) print(len(english_strs)) splitter = ", \"name\": " print(splitter) ###Output , "name": ###Markdown Gather English and German stopwords ###Code en_stop_words = stopwords.words('english') ge_stop_words = stopwords.words('german') stop_words = ge_stop_words + en_stop_words print(len(stop_words)," ",stop_words) ge_labels = ['ge'] * len(german_strs) en_labels = ['en'] * len(english_strs) #scratchpad -ignore var1 = 'master of \u00d6 \u0103 \u00e4 \u017e \u00a0 the' print(repr(var1)) print(str(var1)) #scratchpad -ignore text=u"""Europython 2005 G\u00f6teborg, Sweden \u8463\u5049\u696d Hotel rates 100\N{euro sign} """ import codecs def printu(ustr): print(ustr.encode('raw_unicode_escape')) def saveu(ustr, filename='output2.txt'): open(filename,'wb').write(codecs.BOM_UTF8 + ustr.encode('utf8')) saveu(text) all_articles = german_strs + english_strs all_labels = ge_labels + en_labels ###Output _____no_output_____ ###Markdown Create DataFrame with all articles and labels ###Code all_df = pd.DataFrame({"article":all_articles,'label':all_labels}) print(all_df.head()) print(all_df.tail()) rand_no = 12345 score_param = 'accuracy' all_df = all_df.sample(frac=1,random_state=rand_no) print(all_df.shape) ###Output (180, 2) ###Markdown Split data into 72:18:10 ratios for train:validation:test datasets ###Code X_train,X_test,y_train,y_test = train_test_split(all_df['article'],all_df['label'],test_size = 0.1, random_state= rand_no) X_train,X_val,y_train,y_val = train_test_split(X_train,y_train,test_size = 0.2, random_state= rand_no) print(X_train.shape," - ", y_train.shape) print(X_val.shape ," - ", y_val.shape) print(X_test.shape," - ", y_test.shape) ###Output (129,) - (129,) (33,) - (33,) (18,) - (18,) ###Markdown Vectorize the datasets ###Code vectorizer = TfidfVectorizer( ngram_range=(1, 4), stop_words=stop_words, min_df=8, max_df=50, lowercase=True, strip_accents='ascii') X_train_matrix = vectorizer.fit_transform(X_train).toarray() X_val_matrix = vectorizer.transform(X_val).toarray() X_test_matrix = vectorizer.transform(X_test).toarray() print(X_train_matrix.shape) print(X_val_matrix.shape) print(X_test_matrix.shape) print(vectorizer.idf_.shape) ###Output (129, 1371) (33, 1371) (18, 1371) (1371,) ###Markdown Utility function to fit the model, predict metrics ###Code models = [] scores = [] train_preds = None val_preds= None #main method to fit a model,do predictions and print metrics def predictModelMetrics(model,model_name): train_preds,val_preds = resetPreds() train_preds,val_preds = doPredict(model,model_name) printMetrics(model,train_preds,val_preds) #reset prediction variables before running next model def resetPreds(): train_preds = None val_preds = None return train_preds,val_preds #make predictions def doPredict(model,model_name): model.fit(X_train_matrix,y_train) train_preds = model.predict(X_train_matrix) val_preds = model.predict(X_val_matrix) models.append(model_name) return train_preds,val_preds #prints and captures metrics def printMetrics(model,train_preds,val_preds): print("Train Accuracy: %.2f"%accuracy_score(y_train,train_preds)) #k-fold with k=5 val_scores = cross_val_score(model,X_val_matrix,y_val, cv=5, scoring=score_param) cv_score = val_scores.mean() 
print("Cross Validation Accuracy: %.2f" %cv_score) print("-------------------------------------") print("Confusion Matrix:") print(pd.crosstab(y_val,val_preds,rownames=['Actual'],colnames=['Predicted'],margins=True)) #print(classification_report(y_val,val_preds)) scores.append(round(cv_score,2)) #print summary of model accuracies def summarizeResults(): results = pd.DataFrame({"Model":models, "Score":scores}) print(results) #Scratchpad -ignore val1 = None val2 = None def init(): val1 = [10,20,30] val2 = [100,200,300] return val1,val2 val1,val2 = init() print(val1, val2) ###Output [10, 20, 30] [100, 200, 300] ###Markdown Naive Bayes ###Code nbayes = GaussianNB() predictModelMetrics(nbayes,"Naive Bayes") ###Output Train Accuracy: 0.99 Cross Validation Accuracy: 0.97 ------------------------------------- Confusion Matrix: Predicted en ge All Actual en 17 1 18 ge 0 15 15 All 17 16 33 ###Markdown Logistic Regression ###Code logistic_reg = LogisticRegression(solver='liblinear',penalty='l1') predictModelMetrics(logistic_reg, "Logistic Regression") ###Output Train Accuracy: 0.92 Cross Validation Accuracy: 0.67 ------------------------------------- Confusion Matrix: Predicted en ge All Actual en 17 1 18 ge 2 13 15 All 19 14 33 ###Markdown Finding optimal k value ###Code mean_errors =[] k_range = range(1,50) for i in k_range: knn = KNeighborsClassifier(n_neighbors=i) knn.fit(X_train_matrix,y_train) pred_i = knn.predict(X_val_matrix) mean_errors.append(np.mean(y_val != pred_i)) plt.figure(figsize=(12,6)) plt.title("Error rate change with K") plt.plot(k_range,mean_errors,marker='o') plt.xlabel("K value") plt.ylabel("Mean Error"); ###Output _____no_output_____ ###Markdown k=3 is the better value with high decrease in error ###Code knn = KNeighborsClassifier(n_neighbors=5) predictModelMetrics(knn,"KNN") ###Output Train Accuracy: 0.98 Cross Validation Accuracy: 0.91 ------------------------------------- Confusion Matrix: Predicted en ge All Actual en 17 1 18 ge 2 13 15 All 19 14 33 ###Markdown SVC Model ###Code param_grid = {"C":np.arange(0.001,2,0.1), "gamma":np.arange(0.001,2,0.1), "kernel":['linear','rbf','sigmoid']} svc = SVC(C=1, gamma=1, kernel='rbf') random_cv = RandomizedSearchCV( svc, cv=10, param_distributions=param_grid, n_iter=10, scoring=score_param,iid= False) predictModelMetrics(random_cv,"SVC") random_cv.best_params_ #manually tune parameters as GridSearch is taking lot of time xgbc = XGBClassifier(max_depth=5,learning_rate=0.01,n_estimators=600) predictModelMetrics(xgbc,"XGBoost Classifier") summarizeResults() ###Output Model Score 0 Naive Bayes 0.97 1 Logistic Regression 0.67 2 KNN 0.91 3 SVC 0.85 4 XGBoost Classifier 0.76 ###Markdown Run the KNN which gives high Cross Validation accuracy on test data ###Code test_preds = knn.predict(X_test_matrix) #cv_score = cross_val_score(knn,X_test_matrix,y_test,scoring='accuracy',cv=5).mean() #print("CV on test:%.2f"%cv_score) test_score = accuracy_score(y_test,test_preds) print("Accuracy Score on test:%.2f"%test_score) ###Output Accuracy Score on test:1.00
docs/source/notebooks/GLM-logistic.ipynb
###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. ''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal()) traces[nm] = pm.sample(2000, chains=1, init=None, tune=1000) return models, traces def plot_traces(traces, retain=1000): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5), lines={k: v['mean'] for k, v in pm.summary(traces[-retain:]).iterrows()}) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. 
###Code data = data[~pd.isnull(data['income'])] data[data['native-country']==" United-States"] income = 1 * (data['income'] == " >50K") age2 = np.square(data['age']) data = data[['age', 'educ', 'hours']] data['age2'] = age2 data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! 
I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace_logistic_model = pm.sample(2000, chains=1, tune=1000) plot_traces(trace_logistic_model, retain=1000) ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) trace = trace_logistic_model[1000:] seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ") plt.show() ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*12 + samples['hours']*50))) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*16 + samples['hours']*50))) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*19 + samples['hours']*50))) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub))) ###Output P(2.617 < O.R. < 2.824) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 4) dfwaic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfwaic.index.name = 'model' for nm in dfwaic.index: dfwaic.loc[nm, 'lin'] = pm.waic(traces_lin[nm],models_lin[nm])[0] dfwaic = pd.melt(dfwaic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic') g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfwaic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code import warnings from collections import OrderedDict from time import time import arviz as az import matplotlib.pyplot as plt import numpy as np import pandas as pd import pymc3 as pm import seaborn import theano as thno import theano.tensor as T from scipy import integrate from scipy.optimize import fmin_powell print('Running on PyMC3 v{}'.format(pm.__version__)) %config InlineBackend.figure_format = 'retina' warnings.filterwarnings('ignore') az.style.use('arviz-darkgrid') def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. 
''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Binomial()) traces[nm] = pm.sample(1000, tune=1000, init='adapt_diag') return models, traces def plot_traces(traces, retain=0): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], lines=tuple([(k, {}, v['mean']) for k, v in pm.summary(traces[-retain:]).iterrows()])) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code raw_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) raw_data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = raw_data[~pd.isnull(raw_data['income'])] data[data['native-country']==" United-States"].sample(5) income = 1 * (data['income'] == " >50K") data = data[['age', 'educ', 'hours']] # Scale age by 10, it helps with model convergence. data['age'] = data['age']/10. data['age2'] = np.square(data['age']) data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? 
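Before drawing the plots, a quick numeric sanity check can back up these observations. This small cell is an addition to the original notebook and only reuses the `data` frame defined above: ###Code
# Optional numeric check of the EDA observations above (added cell, not in the original notebook).
# age: a positive skewness value reflects the long right tail noted above.
# hours: the value counts should show the large spike of people reporting exactly 40 hours/week.
# corr: no single feature is expected to correlate strongly with income on its own.
print("skewness of age:", data['age'].skew())
print(data['hours'].value_counts().head())
print(data.corr()['income'].sort_values())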
###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax); ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace = pm.sample(1000, tune=1000, init='adapt_diag') plot_traces(trace); ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? 
Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ"); ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code def lm_full(trace, age, educ, hours): shape = np.broadcast(age, educ, hours).shape x_norm = np.asarray([np.broadcast_to(x, shape) for x in [age/10., educ, hours]]) return 1 / (1 + np.exp(-(trace['Intercept'] + trace['age']*x_norm[0] + trace['age2']*(x_norm[0]**2) + trace['educ']*x_norm[1] + trace['hours']*x_norm[2]))) # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: lm_full(samples, x, 12., 50.) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: lm_full(samples, x, 16., 50.) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: lm_full(samples, x, 19., 50.) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95" % (np.exp(lb),np.exp(ub))) ###Output P(1.377 < O.R. < 1.414) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 3) model_trace_dict = dict() for nm in ['k1', 'k2', 'k3']: models_lin[nm].name = nm model_trace_dict.update({models_lin[nm]: traces_lin[nm]}) dfwaic = pm.compare(model_trace_dict, ic='WAIC') pm.compareplot(dfwaic); ###Output _____no_output_____ ###Markdown WAIC confirms our decision to use age^2. ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output pymc3 3.8 arviz 0.8.3 numpy 1.17.5 last updated: Thu Jun 11 2020 CPython 3.8.2 IPython 7.11.0 watermark 2.0.2 ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code import arviz as az import matplotlib.pyplot as plt import numpy as np import pandas as pd import pymc3 as pm import seaborn import theano as thno import theano.tensor as T import warnings from collections import OrderedDict from scipy.optimize import fmin_powell from scipy import integrate from time import time print('Running on PyMC3 v{}'.format(pm.__version__)) %config InlineBackend.figure_format = 'retina' warnings.filterwarnings('ignore') az.style.use('arviz-darkgrid') def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. 
''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Binomial()) traces[nm] = pm.sample(1000, tune=1000, init='adapt_diag') return models, traces def plot_traces(traces, retain=0): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], lines=tuple([(k, {}, v['mean']) for k, v in pm.summary(traces[-retain:]).iterrows()])) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code raw_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) raw_data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = raw_data[~pd.isnull(raw_data['income'])] data[data['native-country']==" United-States"].sample(5) income = 1 * (data['income'] == " >50K") data = data[['age', 'educ', 'hours']] # Scale age by 10, it helps with model convergence. data['age'] = data['age']/10. data['age2'] = np.square(data['age']) data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? 
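One more number worth keeping in mind while reading the plots that follow is the base rate of the outcome itself, i.e. the share of people in the data who actually earn more than $50K. This one-line cell is an addition to the original notebook and only reuses the `income` series created above: ###Code
# Empirical P(income > $50K) in the training data (the marginal base rate; added cell).
print(income.value_counts(normalize=True))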
###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax); ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace = pm.sample(1000, tune=1000, init='adapt_diag') plot_traces(trace); ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? 
Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ"); ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code def lm_full(trace, age, educ, hours): shape = np.broadcast(age, educ, hours).shape x_norm = np.asarray([np.broadcast_to(x, shape) for x in [age/10., educ, hours]]) return 1 / (1 + np.exp(-(trace['Intercept'] + trace['age']*x_norm[0] + trace['age2']*(x_norm[0]**2) + trace['educ']*x_norm[1] + trace['hours']*x_norm[2]))) # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: lm_full(samples, x, 12., 50.) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: lm_full(samples, x, 16., 50.) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: lm_full(samples, x, 19., 50.) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95" % (np.exp(lb),np.exp(ub))) ###Output P(1.377 < O.R. < 1.414) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 3) model_trace_dict = dict() for nm in ['k1', 'k2', 'k3']: models_lin[nm].name = nm model_trace_dict.update({models_lin[nm]: traces_lin[nm]}) dfwaic = pm.compare(model_trace_dict, ic='WAIC') pm.compareplot(dfwaic); ###Output _____no_output_____ ###Markdown WAIC confirms our decision to use age^2. ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output pymc3 3.8 arviz 0.8.3 numpy 1.17.5 last updated: Thu Jun 11 2020 CPython 3.8.2 IPython 7.11.0 watermark 2.0.2 ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. 
''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal()) traces[nm] = pm.sample(2000, chains=1, init=None, tune=1000) return models, traces def plot_traces(traces, retain=1000): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5), lines={k: v['mean'] for k, v in pm.summary(traces[-retain:]).iterrows()}) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = data[~pd.isnull(data['income'])] data[data['native-country']==" United-States"] income = 1 * (data['income'] == " >50K") age2 = np.square(data['age']) data = data[['age', 'educ', 'hours']] data['age2'] = age2 data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. 
We see a weak correlation between hours and income (which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering). The model We will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 to do the inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters (in this case, the regression coefficients). The posterior is given by Bayes' theorem: $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$ Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $, we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(\theta)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves. The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$, where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameters are tuned automatically. Notice that we get to borrow R's syntax for specifying GLMs, which is very convenient! I use a convenience function from above to plot the trace information for the model parameters, retaining the last 1000 samples. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace_logistic_model = pm.sample(2000, chains=1, init=None, tune=1000) plot_traces(trace_logistic_model, retain=1000) ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values. I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) trace = trace_logistic_model[1000:] seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ") plt.show() ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $50K? To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels.
Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*12 + samples['hours']*50))) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*16 + samples['hours']*50))) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*19 + samples['hours']*50))) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. age pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95" % (np.exp(lb),np.exp(ub))) ###Output P(1.378 < O.R. 
< 1.414) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 4) dfwaic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfwaic.index.name = 'model' for nm in dfwaic.index: dfwaic.loc[nm, 'lin'] = pm.waic(traces_lin[nm],models_lin[nm])[0] dfwaic = pd.melt(dfwaic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic') g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfwaic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. ''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Binomial()) traces[nm] = pm.sample(1000, tune=1000, init='adapt_diag') return models, traces def plot_traces(traces, retain=0): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], lines=tuple([(k, {}, v['mean']) for k, v in pm.summary(traces[-retain:]).iterrows()])) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. 
The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart. My motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression. ###Code raw_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) raw_data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaning We need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = raw_data[~pd.isnull(raw_data['income'])] data[data['native-country']==" United-States"].sample(5) income = 1 * (data['income'] == " >50K") data = data[['age', 'educ', 'hours']] # Scale age by 10, it helps with model convergence. data['age'] = data['age']/10. data['age2'] = np.square(data['age']) data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a skewed distribution with a long right tail. Certainly not Gaussian! * We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax); ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak correlation between hours and income (which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering). The model We will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 to do the inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters (in this case, the regression coefficients). The posterior is given by Bayes' theorem: $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$ Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $, we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator.
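Put differently (this just restates the formula above, nothing new is assumed), MCMC only ever needs the unnormalized posterior $$p(\theta \mid D) \propto p(D \mid \theta)\,p(\theta),$$ so the intractable normalizing constant $p(D)$ never has to be evaluated.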
Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace = pm.sample(1000, tune=1000, init='adapt_diag') plot_traces(trace); ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ"); ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code def lm_full(trace, age, educ, hours): shape = np.broadcast(age, educ, hours).shape x_norm = np.asarray([np.broadcast_to(x, shape) for x in [age/10., educ, hours]]) return 1 / (1 + np.exp(-(trace['Intercept'] + trace['age']*x_norm[0] + trace['age2']*(x_norm[0]**2) + trace['educ']*x_norm[1] + trace['hours']*x_norm[2]))) # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: lm_full(samples, x, 12., 50.) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: lm_full(samples, x, 16., 50.) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: lm_full(samples, x, 19., 50.) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. 
The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. age pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95" % (np.exp(lb),np.exp(ub))) ###Output P(1.377 < O.R. < 1.414) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 3) model_trace_dict = dict() for nm in ['k1', 'k2', 'k3']: models_lin[nm].name = nm model_trace_dict.update({models_lin[nm]: traces_lin[nm]}) dfwaic = pm.compare(model_trace_dict, ic='WAIC') pm.compareplot(dfwaic); ###Output _____no_output_____ ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. 
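Since WAIC is used for model selection later in the notebook, a brief reminder may help; this aside is my addition and not part of the original notebook. WAIC estimates expected out-of-sample predictive accuracy as $$\widehat{\mathrm{elpd}}_{\mathrm{WAIC}} = \mathrm{lppd} - p_{\mathrm{WAIC}},$$ where lppd is the computed log pointwise predictive density and $p_{\mathrm{WAIC}}$ penalizes the effective number of parameters; a higher elpd (equivalently, a lower WAIC on the deviance scale, $-2\,\widehat{\mathrm{elpd}}_{\mathrm{WAIC}}$) indicates better expected predictive performance.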
###Code import arviz as az import matplotlib.pyplot as plt import numpy as np import pandas as pd import pymc3 as pm import seaborn import theano as thno import theano.tensor as T import warnings from collections import OrderedDict from scipy import integrate from scipy.optimize import fmin_powell from time import time print('Running on PyMC3 v{}'.format(pm.__version__)) %config InlineBackend.figure_format = 'retina' warnings.filterwarnings('ignore') az.style.use('arviz-darkgrid') def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. ''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Binomial()) traces[nm] = pm.sample(1000, tune=1000, init='adapt_diag') return models, traces def plot_traces(traces, retain=0): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], lines=tuple([(k, {}, v['mean']) for k, v in pm.summary(traces[-retain:]).iterrows()])) for i, mn in enumerate(pm.summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code raw_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) raw_data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = raw_data[~pd.isnull(raw_data['income'])] data[data['native-country']==" United-States"].sample(5) income = 1 * (data['income'] == " >50K") data = data[['age', 'educ', 'hours']] # Scale age by 10, it helps with model convergence. data['age'] = data['age']/10. data['age2'] = np.square(data['age']) data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. 
* Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax); ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace = pm.sample(1000, tune=1000, init='adapt_diag') plot_traces(trace); ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. 
Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ"); ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code def lm_full(trace, age, educ, hours): shape = np.broadcast(age, educ, hours).shape x_norm = np.asarray([np.broadcast_to(x, shape) for x in [age/10., educ, hours]]) return 1 / (1 + np.exp(-(trace['Intercept'] + trace['age']*x_norm[0] + trace['age2']*(x_norm[0]**2) + trace['educ']*x_norm[1] + trace['hours']*x_norm[2]))) # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: lm_full(samples, x, 12., 50.) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: lm_full(samples, x, 16., 50.) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: lm_full(samples, x, 19., 50.) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace( 25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95" % (np.exp(lb),np.exp(ub))) ###Output P(1.377 < O.R. < 1.414) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 3) model_trace_dict = dict() for nm in ['k1', 'k2', 'k3']: models_lin[nm].name = nm model_trace_dict.update({models_lin[nm]: traces_lin[nm]}) dfwaic = pm.compare(model_trace_dict, ic='WAIC') pm.compareplot(dfwaic); ###Output _____no_output_____ ###Markdown WAIC confirms our decision to use age^2. ###Code %load_ext watermark %watermark -n -u -v -iv -w ###Output pymc3 3.8 arviz 0.8.3 numpy 1.17.5 last updated: Thu Jun 11 2020 CPython 3.8.2 IPython 7.11.0 watermark 2.0.2 ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use DIC and WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. 
''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal()) traces[nm] = pm.sample(2000, init=None) return models, traces def plot_traces(traces, retain=1000): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5), lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()}) for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = data[~pd.isnull(data['income'])] data[data['native-country']==" United-States"] income = 1 * (data['income'] == " >50K") age2 = np.square(data['age']) data = data[['age', 'educ', 'hours']] data['age2'] = age2 data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. 
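To read the exact values off rather than eyeballing the heatmap, we can sort the correlations with income directly; this is a quick sketch reusing the `corr` matrix computed above. ###Code
# Correlation of each feature with income, strongest first
corr['income'].drop('income').sort_values(ascending=False)
###Output _____no_output_____ ###Markdown 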
We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace_logistic_model = pm.sample(4000) plot_traces(trace_logistic_model, retain=1000) ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) trace = trace_logistic_model[1000:] seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ") plt.show() ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. 
PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*12 + samples['hours']*50))) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*16 + samples['hours']*50))) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*19 + samples['hours']*50))) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. age pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub))) ###Output P(2.612 < O.R. 
< 2.829) = 0.95 ###Markdown Model selection The [Deviance Information Criterion (DIC)](https://en.wikipedia.org/wiki/Deviance_information_criterion) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question. ###Code models_lin, traces_lin = run_models(data, 4) dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfdic.index.name = 'model' for nm in dfdic.index: dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm], models_lin[nm]) dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic') g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.Next we look at [WAIC](http://watanabe-www.math.dis.titech.ac.jp/users/swatanab/dicwaic.html). Which is another model selection technique. ###Code dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfdic.index.name = 'model' for nm in dfdic.index: dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])[0] dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic') g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use DIC and WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. 
''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.glm(fml, df, family=pm.glm.families.Normal()) start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False) traces[nm] = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True) return models, traces def plot_traces(traces, retain=1000): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5), lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()}) for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) data ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. ###Code data = data[~pd.isnull(data['income'])] data[data['native-country']==" United-States"] income = 1 * (data['income'] == " >50K") age2 = np.square(data['age']) data = data[['age', 'educ', 'hours']] data['age2'] = age2 data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? 
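A quick numerical summary helps answer that question; the sketch below assumes the `data` frame prepared in the previous cell. ###Code
# Summary statistics and skewness for hours worked per week
print(data['hours'].describe())
print('skewness:', data['hours'].skew())
###Output _____no_output_____ ###Markdown The column is heavily concentrated at the standard 40-hour work week with long tails on either side, so it is certainly not Gaussian either. The pairplot and correlation heatmap below give the visual counterpart.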
###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.glm('income ~ age + age2 + educ', data, family=pm.glm.families.Binomial()) trace_logistic_model = pm.sample(2000, progressbar=True) plot_traces(trace_logistic_model, retain=1000) ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? 
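For comparison, the frequentist point estimates can be obtained in a couple of lines with statsmodels; this is a hedged sketch (it assumes statsmodels is installed and uses the full specification written out in the model section above). ###Code
import statsmodels.formula.api as smf

# Maximum-likelihood fit of the same logistic specification: a single number per coefficient
ml_fit = smf.logit('income ~ age + age2 + educ + hours', data).fit()
print(ml_fit.params)
###Output _____no_output_____ ###Markdown 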
Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) trace = trace_logistic_model[1000:] seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ") plt.show() ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*12 + samples['hours']*50))) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*16 + samples['hours']*50))) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*19 + samples['hours']*50))) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub))) ###Output P(1.000 < O.R. < 1.000) = 0.95 ###Markdown Model selection The [Deviance Information Criterion (DIC)](https://en.wikipedia.org/wiki/Deviance_information_criterion) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question. ###Code models_lin, traces_lin = run_models(data, 4) dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfdic.index.name = 'model' for nm in dfdic.index: dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm]) dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic') g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.Next we look at [WAIC](http://watanabe-www.math.dis.titech.ac.jp/users/swatanab/dicwaic.html). Which is another model selection technique. ###Code dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfdic.index.name = 'model' for nm in dfdic.index: dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm]) dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic') g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6) ###Output _____no_output_____ ###Markdown GLM: Logistic Regression* This is a reproduction with a few slight alterations of [Bayesian Log Reg](http://jbencook.github.io/portfolio/bayesian_logistic_regression.html) by J. Benjamin Cook* Author: Peadar Coyle and J. 
Benjamin Cook* How likely am I to make more than $50,000 US Dollars?* Exploration of model selection techniques too - I use WAIC to select the best model. * The convenience functions are all taken from Jon Sedars work.* This example also has some explorations of the features so serves as a good example of Exploratory Data Analysis and how that can guide the model creation/ model selection process. ###Code %matplotlib inline import pandas as pd import numpy as np import pymc3 as pm import matplotlib.pyplot as plt import seaborn import warnings warnings.filterwarnings('ignore') from collections import OrderedDict from time import time import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import fmin_powell from scipy import integrate import theano as thno import theano.tensor as T def run_models(df, upper_order=5): ''' Convenience function: Fit a range of pymc3 models of increasing polynomial complexity. Suggest limit to max order 5 since calculation time is exponential. ''' models, traces = OrderedDict(), OrderedDict() for k in range(1,upper_order+1): nm = 'k{}'.format(k) fml = create_poly_modelspec(k) with pm.Model() as models[nm]: print('\nRunning: {}'.format(nm)) pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal()) traces[nm] = pm.sample(2000, chains=1, init=None, tune=1000) return models, traces def plot_traces(traces, retain=1000): ''' Convenience function: Plot traces with overlaid means and values ''' ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5), lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()}) for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data' ,xytext=(5,10), textcoords='offset points', rotation=90 ,va='bottom', fontsize='large', color='#AA0022') def create_poly_modelspec(k=1): ''' Convenience function: Create a polynomial modelspec string for patsy ''' return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) for j in range(2,k+1)])).strip() ###Output _____no_output_____ ###Markdown The [Adult Data Set](http://archive.ics.uci.edu/ml/datasets/Adult) is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.The motivation for myself to reproduce this piece of work was to learn how to use Odd Ratio in Bayesian Regression. ###Code data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt', 'education-categorical', 'educ', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'captial-gain', 'capital-loss', 'hours', 'native-country', 'income']) data.head(10) ###Output _____no_output_____ ###Markdown Scrubbing and cleaningWe need to remove any null entries in Income. And we also want to restrict this study to the United States. 
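One caveat worth flagging before running the next cell: the country filter there is evaluated but never assigned back to `data`, so the restriction to United-States records does not actually take effect. A corrected version of that step might look like this sketch (note the assignments). ###Code
# Drop rows with a missing income and keep only US records (the assignment is what makes the filter stick)
data = data[~pd.isnull(data['income'])]
data = data[data['native-country'] == " United-States"]
###Output _____no_output_____ ###Markdown The original cell is kept below for reference.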
###Code data = data[~pd.isnull(data['income'])] data[data['native-country']==" United-States"] income = 1 * (data['income'] == " >50K") age2 = np.square(data['age']) data = data[['age', 'educ', 'hours']] data['age2'] = age2 data['income'] = income income.value_counts() ###Output _____no_output_____ ###Markdown Exploring the data Let us get a feel for the parameters. * We see that age is a tailed distribution. Certainly not Gaussian!* We don't see much of a correlation between many of the features, with the exception of Age and Age2. * Hours worked has some interesting behaviour. How would one describe this distribution? ###Code g = seaborn.pairplot(data) # Compute the correlation matrix corr = data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = seaborn.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) ###Output _____no_output_____ ###Markdown We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak-correlation between hours and income (which is logical), we see a slighty stronger correlation between education and income (which is the kind of question we are answering). The modelWe will use a simple model, which assumes that the probability of making more than $50K is a function of age, years of education and hours worked per week. We will use PyMC3 do inference. In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters(in this case the regression coefficients)The posterior is equal to the likelihood $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. What this means in practice is that we only need to worry about the numerator. Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y} (1 - p_{i})^{1-y_{i}}$,where $p_i = \frac{1}{1 + e^{-z_i}}$, $z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_2(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameteres are tuned automatically. Notice, that we get to borrow the syntax of specifying GLM's from R, very convenient! 
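Written out in plain NumPy, the quantity the sampler works with looks roughly like the sketch below; the coefficient values are made up purely to illustrate how $z_i$, $p_i$ and the Bernoulli likelihood from the formulas above fit together. ###Code
# Hypothetical coefficient values, for illustration only
beta = {'Intercept': -6.0, 'age': 0.2, 'age2': -0.002, 'educ': 0.35, 'hours': 0.03}

z = (beta['Intercept'] + beta['age'] * data['age'] + beta['age2'] * data['age2']
     + beta['educ'] * data['educ'] + beta['hours'] * data['hours'])
p = 1 / (1 + np.exp(-z))                     # p_i: probability that income > $50K
log_lik = np.sum(data['income'] * np.log(p) + (1 - data['income']) * np.log(1 - p))
log_lik
###Output _____no_output_____ ###Markdown 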
I use a convenience function from above to plot the trace infromation from the first 1000 parameters. ###Code with pm.Model() as logistic_model: pm.glm.GLM.from_formula('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial()) trace_logistic_model = pm.sample(2000, chains=1, tune=1000) plot_traces(trace_logistic_model, retain=1000) ###Output _____no_output_____ ###Markdown Some results One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.I'll use seaborn to look at the distribution of some of these factors. ###Code plt.figure(figsize=(9,7)) trace = trace_logistic_model[1000:] seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391") plt.xlabel("beta_age") plt.ylabel("beta_educ") plt.show() ###Output _____no_output_____ ###Markdown So how do age and education affect the probability of making more than $$50K?$ To answer this question, we can show how the probability of making more than $50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school). ###Code # Linear model with hours == 50 and educ == 12 lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*12 + samples['hours']*50))) # Linear model with hours == 50 and educ == 16 lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*16 + samples['hours']*50))) # Linear model with hours == 50 and educ == 19 lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + samples['age']*x + samples['age2']*np.square(x) + samples['educ']*19 + samples['hours']*50))) ###Output _____no_output_____ ###Markdown Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values. ###Code # Plot the posterior predictive distributions of P(income > $50K) vs. 
age pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15) pm.plot_posterior_predictive_glm(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15) import matplotlib.lines as mlines blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education') green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors') red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School') plt.legend(handles=[blue_line, green_line, red_line], loc='lower right') plt.ylabel("P(Income > $50K)") plt.xlabel("Age") plt.show() b = trace['educ'] plt.hist(np.exp(b), bins=20, normed=True) plt.xlabel("Odds Ratio") plt.show() ###Output _____no_output_____ ###Markdown Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval! ###Code lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5) print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub))) ###Output P(2.617 < O.R. < 2.824) = 0.95 ###Markdown Model selection One question that was immediately asked was what effect does age have on the model, and why should it be $age^2$ versus age? We'll run the model with a few changes to see what effect higher order terms have on this model in terms of WAIC. ###Code models_lin, traces_lin = run_models(data, 4) dfwaic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin']) dfwaic.index.name = 'model' for nm in dfwaic.index: dfwaic.loc[nm, 'lin'] = pm.waic(traces_lin[nm],models_lin[nm])[0] dfwaic = pd.melt(dfwaic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic') g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfwaic, kind='bar', size=6) ###Output _____no_output_____
samples/04_gis_analysts_data_scientists/chennai_floods_analysis.ipynb
###Markdown Chennai Floods 2015–A Geographic Analysis
On December 1–2, 2015, the Indian city of Chennai received more rainfall in 24 hours than it had seen on any day since 1901. The deluge followed a month of persistent monsoon rains that were already well above normal for the Indian state of Tamil Nadu. At least 250 people had died, several hundred had been critically injured, and thousands had been affected or displaced by the flooding that ensued.
Table of Contents: Chennai Floods 2015–A Geographic Analysis; Summary of this sample; Chennai Floods Explained; How much rain and where?; Spatial Analysis; What caused the flooding in Chennai?; A wrong call that sank Chennai; Flood Relief Camps; Routing Emergency Supplies to Relief Camps
The image above provides satellite-based estimates of rainfall over southeastern India on December 1–2, accumulating in 30–minute intervals. The rainfall data is acquired from the Integrated Multi-Satellite Retrievals for GPM (IMERG), a product of the [Global Precipitation Measurement](http://www.nasa.gov/mission_pages/GPM/main/index.html) mission. The brightest shades on the maps represent rainfall totals approaching 400 millimeters (16 inches) during the 48-hour period. These regional, remotely-sensed estimates may differ from the totals measured by ground-based weather stations. According to Hal Pierce, a scientist on the GPM team at NASA’s Goddard Space Flight Center, the highest rainfall totals exceeded 500 mm (20 inches) in an area just off the southeastern coast. [Source: NASA http://earthobservatory.nasa.gov/IOTD/view.php?id=87131]
Summary of this sample
This sample showcases not just the analysis and visualization capabilities of your GIS, but also the ability to store illustrative text, graphics and live code in a Jupyter notebook. The sample starts off reporting the devastating effects of the flood. We plot the locations of rainfall gauges and **interpolate** the data to create a continuous surface representing the amount of rainfall throughout the state. Next we plot the locations of major lakes and **trace downstream** the path flood waters would take. We create a **buffer** around this path to demarcate at-risk areas. In the second part of the sample, we take a look at **time series** satellite imagery and observe the human impacts on natural reservoirs over a period of two decades. We then visualize the locations of relief camps and analyze their capacity using **pandas** and **matplotlib**. We **aggregate** the camps district-wise to understand which ones have the largest number of refugees. In the last part, we perform a **routing** analysis to figure out the best path to route emergency supplies from storage to the relief camps. First, let's import all the necessary libraries and connect to our GIS via an existing profile, or by creating a new connection, e.g. `gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")`.
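If you go the profile route, the credentials only need to be supplied once; the cell below is a sketch (using the same placeholder account as the example above) that stores them locally so later sessions can connect with just the profile name. ###Code
from arcgis.gis import GIS

# One-time setup: passing a profile name together with the credentials persists them,
# so subsequent notebooks can simply call GIS(profile="your_online_profile")
GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123", profile="your_online_profile")
###Output _____no_output_____ ###Markdown With a connection available, we can bring in the libraries used throughout the analysis.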
###Code import datetime %matplotlib inline import matplotlib.pyplot as pd from IPython.display import display, YouTubeVideo import arcgis from arcgis.gis import GIS from arcgis.features.analyze_patterns import interpolate_points from arcgis.geocoding import geocode from arcgis.features.find_locations import trace_downstream from arcgis.features.use_proximity import create_buffers gis = GIS(profile = "your_online_profile") ###Output _____no_output_____ ###Markdown Chennai Floods Explained ###Code YouTubeVideo('x4dNIfx6HVs') ###Output _____no_output_____ ###Markdown The catastrophic flooding in Chennai is the result of the heaviest rain in several decades, which forced authorities to release a massive 30,000 cusecs from the Chembarambakkam reservoir into the Adyar river over two days, causing it to flood its banks and submerge neighbourhoods on both sides. It did not help that the Adyar’s stream is not very deep or wide, and its banks have been heavily encroached upon over the years.Similar flooding triggers were in action at Poondi and Puzhal reservoirs, and the Cooum river that winds its way through the city.While Chief Minister J Jayalalithaa said, during the earlier phase of heavy rain last month, that damage during the monsoon was “inevitable”, the fact remains that the mindless development of Chennai over the last two decades — the filling up of lowlands and choking of stormwater drains and other exits for water — has played a major part in the escalation of the crisis.[Source: Indian Express http://indianexpress.com/article/explained/why-is-chennai-under-water/sthash.LlhnqM4B.dpuf] How much rain and where? To get started with our analysis, we bring in a map of the affected region. The map is a live widget that is internally using the ArcGIS JavaScript API. ###Code map = gis.map("Chennai") map ###Output _____no_output_____ ###Markdown We can search for content in our GIS and add layers to our map that can be used for visualization or analysis: ###Code chennaipop = gis.content.search("Chennai_Population", item_type="Feature Layer", outside_org=True)[0] chennaipop ###Output _____no_output_____ ###Markdown Assign an optional JSON paramter to specify its opacity, e.g. `map.add_layer(chennaipop, {"opacity":0.7})` or else just add the layer with no transparency. ###Code map.add_layer(chennaipop, {"renderer":"ClassedColorRenderer", "field_name": "TOTPOP_CY", "opacity":0.7}) ###Output _____no_output_____ ###Markdown To get a sense of how much it rained and where, let's use rainfall data for December 2nd 2015, obtained from the Regional Meteorological Center in Chennai. Tabular data is hard to visualize, so let's bring in a map from our GIS to visualize the data: ###Code search_rainfall = gis.content.search("Chennai_precipitation", item_type="Feature Layer", outside_org=True) if len(search_rainfall) >= 1: rainfall = search_rainfall[0] else: # if the "Chennai_precipitation" web layer does not exist print("Web Layer does not exist. 
Re-publishing...") # import any pandas data frame, with an address field, as a layer in our GIS import pandas as pds df = pds.read_csv('data/Chennai_precipitation.csv') # Create an arcgis.features.FeatureCollection object by importing the pandas dataframe with an address field rainfall = gis.content.import_data(df, {"Address" : "LOCATION"}) map2 = gis.map("Tamil Nadu, India") map2 ###Output _____no_output_____ ###Markdown We then add this layer to our map to see the locations of the weather stations from which the rainfall data was collected: ###Code map2.add_layer(rainfall, {"renderer":"ClassedSizeRenderer", "field_name":"RAINFALL" }) ###Output _____no_output_____ ###Markdown Here we used the **smart mapping** capability of the GIS to automatically render the data with proportional symbols. Spatial AnalysisRainfall is a continuous phenonmenon that affects the whole region, not just the locations of the weather stations. Based on the observed rainfall at the monitoring stations and their locations, we can interpolate and deduce the approximate rainfall across the whole region. We use the **Interpolate Points** tool from the GIS's spatial analysis service for this.The Interpolate Points tool uses empirical Bayesian kriging to perform the interpolation. ###Code interpolated_rf = interpolate_points(rainfall, field='RAINFALL') ###Output _____no_output_____ ###Markdown Let us create another map of Tamil Nadu state and render the output from Interpolate Points tool ###Code intmap = gis.map("Tamil Nadu") intmap intmap.add_layer(interpolated_rf['result_layer']) ###Output _____no_output_____ ###Markdown We see that rainfall was most severe in and around Chennai as well some parts of central Tamil Nadu. What caused the flooding in Chennai? A wrong call that sank ChennaiMuch of the flooding and subsequent waterlogging was a consequence of the outflows from major reservoirs into swollen rivers and into the city following heavy rains. The release of waters from the Chembarambakkam reservoir in particular has received much attention. [Source: The Hindu, http://www.thehindu.com/news/cities/chennai/chennai-floods-a-wrong-call-that-sank-the-city/article7967371.ece] ###Code lakemap = gis.map("Chennai") lakemap.height='450px' lakemap ###Output _____no_output_____ ###Markdown Let's have look at the major lakes and water reservoirs that were filled to the brim in Chennai due the rains. We plot the locations of some of the reservoirs that had a large outflow during the rains:To plot the locations, we use geocoding tools from the `tools` module. Your GIS can have more than 1 geocoding service, for simplicity, the sample below chooses the first available geocoder to perform an address search ###Code lakemap.draw(geocode("Chembarambakkam, Tamil Nadu")[0], {"title": "Chembarambakkam", "content": "Water reservoir"}) lakemap.draw(geocode("Puzhal Lake, Tamil Nadu")[0], {"title": "Puzhal", "content": "Water reservoir"}) lakemap.draw(geocode("Kannampettai, Tamil Nadu")[0], {"title": "Poondi Lake ", "content": "Water reservoir"}) ###Output _____no_output_____ ###Markdown To identify the flood prone areas, let's trace the path that the water would take when released from the lakes. 
To do this, we first bring in a layer of lakes in Chennai: ###Code search_results = gis.content.search("Chennai_lakes", item_type="Feature Layer", outside_org=True) search_results chennai_lakes = search_results[2] chennai_lakes ###Output _____no_output_____ ###Markdown Now, let's call the **`Trace Downstream`** analysis tool from the GIS: ###Code downstream = trace_downstream(chennai_lakes) downstream.query() ###Output _____no_output_____ ###Markdown The areas surrounding the trace paths are most prone to flooding and waterlogging. To identify the areas that were at risk, we buffer the traced flow paths by one mile in each direction and visualize it on the map. We see that large areas of the city of Chennai were susceptible to flooding and waterlogging. ###Code floodprone_buffer = create_buffers(downstream, [ 1 ], units='Miles') lakemap.add_layer(floodprone_buffer) ###Output _____no_output_____ ###Markdown Nature's fury or human made disaster?"It is easy to attribute the devastation from unexpected flooding to the results of nature and climate change when in fact it is a result of poor planning and infrastructure. In Chennai, as in several cities across the country, we are experiencing the wanton destruction of our natural buffer zones—rivers, creeks, estuaries, marshlands, lakes—in the name of urban renewal and environmental conservation.The recent floods in Chennai are a fallout of real estate riding roughshod over the city’s waterbodies. Facilitated by an administration that tweaked and modified building rules and urban plans, the real estate boom has consumed the city’s lakes, ponds, tanks and large marshlands.The Ennore creek that used to be home to sprawling mangroves is fast disappearing with soil dredged from the sea being dumped there. The Kodungaiyur dump site in the Madhavaram–Manali wetlands is one of two municipal landfills that service the city. Velachery and Pallikaranai marshlands are a part of the Kovalam basin that was the southern-most of the four river basins for the city. Today, the slightest rains cause flooding and water stagnation in Velachery, home to the city’s largest mall, several other commercial and residential buildings, and also the site where low income communities were allocated land.The Pallikaranai marshlands, once a site for beautiful migratory birds, are now home to the second of the two landfills in the city where the garbage is rapidly leeching into the water and killing the delicate ecosystem."[Source: Chennai's Rain Check http://www.epw.in/commentary/chennais-rain-check.html]There are several marshlands and mangroves in the Chennai region that act as natural buffer zones to collect rain water. Let's see the human impact on Pallikaranai marshland over the last decade by comparing satellite images. 
###Code def exact_search(my_gis, title, owner_value, item_type_value, max_items_value=20): final_match = None search_result = my_gis.content.search(query= title + ' AND owner:' + owner_value, item_type=item_type_value, max_items=max_items_value, outside_org=True) if "Imagery Layer" in item_type_value: item_type_value = item_type_value.replace("Imagery Layer", "Image Service") elif "Layer" in item_type_value: item_type_value = item_type_value.replace("Layer", "Service") for result in search_result: if result.title == title: final_match = result break return final_match ls_water = exact_search(gis, 'Landsat GLS Multispectral', 'esri', 'Imagery Layer') ls_water ###Output _____no_output_____ ###Markdown Lets us see how the Pallikaranai marshland has changed over the past few decades, and how this has also contributed to the flooding. We create two maps and load the Land / Water Boundary layer to visualize this. This image layer is time enabled, and the map widget gives you the ability to navigate this dataset via time as well. ###Code ls_water_lyr = ls_water.layers[0] from arcgis.geocoding import geocode area = geocode("Tamil Nadu, India", out_sr=ls_water_lyr.properties.extent.spatialReference)[0] ls_water_lyr.extent = area['extent'] ###Output _____no_output_____ ###Markdown In the cell below, we will use a band combination [5,4,3] (a.k.a. mid-IR (Band 5), near-IR (Band 4) and red (Band 3)) of Landsat to provide definition of land-water boundaries and highlights subtle details not readily apparent in the visible bands alone. The reason that we use more infrared bands is to locate inland lakes and streams with greater precision. Generally, the wetter the soil, the darker it appears, because of the infrared absorption capabilities of water. ###Code # data source option from arcgis.raster.functions import stretch, extract_band target_img_layer = stretch(extract_band(ls_water_lyr, [5,4,3]), stretch_type="percentclip", gamma=[1,1,1], dra=True) ###Output _____no_output_____ ###Markdown Use the cell below to filter imageries based on the temporal conditions, and export the filtered results as local images, then show comparatively with other time range. You can either use the where clause e.g. `where="(Year = " + str(start_year) + ")",` or use the temporal filter as shown below. ###Code import pandas as pd from arcgis import geometry import datetime as dt def filter_images(my_map, start_year, end_year): selected = target_img_layer.filter_by(where="(Category = 1) AND (CloudCover <=0.2)", time=[dt.datetime(start_year, 1, 1), dt.datetime(end_year, 1, 1)], geometry=arcgis.geometry.filters.intersects(ls_water_lyr.extent)) my_map.add_layer(selected) fs = selected.query(out_fields="AcquisitionDate, GroupName, Month, DayOfYear, WRS_Row, WRS_Path") tdf = fs.sdf return tdf ###Output _____no_output_____ ###Markdown First, search for qualified satellite imageries (tiles) intersecting with the area of interest at year 1991. ###Code satmap1 = gis.map("Pallikaranai, Tamil Nadu, India", 13) df = filter_images(satmap1, 1991, 1992) df.head() ###Output _____no_output_____ ###Markdown Then search for satellite imageries intersecting with the area of interest at 2009. 
###Code satmap2 = gis.map("Pallikaranai, Tamil Nadu, India", 13) df = filter_images(satmap2, 2009, 2010) df.head() from ipywidgets import * satmap1.layout=Layout(flex='1 1', padding='10px', height='300px') satmap2.layout=Layout(flex='1 1', padding='10px', height='300px') box = HBox([satmap1, satmap2]) box ###Output _____no_output_____ ###Markdown The human impact on the marshland is all too apparent in the satellite images. The marshland has shrunk to less than a third of its size in just two decades."Not long ago, it was a 50-square-kilometre water sprawl in the southern suburbs of Chennai. Now, it is 4.3 square kilometres – less than a tenth of its original. The growing finger of a garbage dump sticks out like a cancerous tumour in the northern part of the marshland. Two major roads cut through the waterbody with few pitifully small culverts that are not up to the job of transferring the rain water flows from such a large catchment. The edges have been eaten into by institutes like the National Institute of Ocean Technology. Ironically, NIOT is an accredited consultant to prepare Environmental Impact Assessments on various subjects, including on the implications of constructing on waterbodies.Other portions of this wetland have been sacrificed to accommodate the IT corridor. But water offers no exemption to elite industry. Unmindful of the lofty intellectuals at work in the glass and steel buildings of the software parks, rainwater goes by habit to occupy its old haunts, bringing the back-office work of American banks to a grinding halt."[Source: http://scroll.in/article/769928/chennai-floods-are-not-a-natural-disaster-theyve-been-created-by-unrestrained-construction] Flood Relief CampsTo provide emergency assistance, the Tamil Nadu government has set up several flood relief camps in the flood affected areas. They provide food, shelter and the basic necessities to thousands of people displaced by the floods. The locations of the flood relief camps was obtained from http://cleanchennai.com/floodrelief/2015/12/09/relief-centers-as-on-8-dec-2015/ and published to the GIS as a layer, that is visualized below: ###Code relief_centers = gis.content.search("Chennai Relief Centers")[0] reliefmap = gis.map("Chennai") reliefmap ###Output _____no_output_____ ###Markdown Assign an optional JSON paramter to specify its opacity, e.g. `reliefmap.add_layer(chennaipop, {"opacity":0.5})` or else just add the layer with no transparency. ###Code reliefmap.add_layer(chennaipop, {"opacity":0.5}) reliefmap.add_layer(relief_centers) ###Output _____no_output_____ ###Markdown Let us read the relief center layer as a pandas dataframe to analyze the data further ###Code relief_data = relief_centers.layers[0].query().sdf relief_data.head() relief_data['No_of_pers'].sum() relief_data['No_of_pers'].describe() relief_data['No_of_pers'].hist() ###Output _____no_output_____ ###Markdown In our dataset, each row represents a relief camp location. To quickly get the dimensions (rows & columns) of our data frame, we use the `shape` property ###Code relief_data.shape ###Output _____no_output_____ ###Markdown As of 8th December, 2015, there were 31,478 people in the 136 relief camps. Let's aggregate them by the district the camp is located in. To accomplish this, we use the `aggregate_points` tool. 
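Before the spatial aggregation, a quick check of the average occupancy (using the same dataframe as above) puts those totals in perspective. ###Code
# Average number of people per relief camp
relief_data['No_of_pers'].sum() / len(relief_data)
###Output _____no_output_____ ###Markdown That works out to a little over 230 people per camp on average. Now for the district-wise aggregation.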
###Code chennai_pop_featurelayer = chennaipop.layers[0] res = arcgis.features.summarize_data.aggregate_points( relief_centers, chennai_pop_featurelayer, False, ["No_of_pers Sum"]) aggr_lyr = res['aggregated_layer'] reliefmap.add_layer(aggr_lyr, { "renderer": "ClassedSizeRenderer", "field_name":"SUM_No_of_pers"}) df = aggr_lyr.query().sdf df.head() ###Output _____no_output_____ ###Markdown Let us represent the aggreate result as a table: ###Code df = aggr_lyr.query().sdf df2 = df[['NAME', 'SUM_No_of_pers']] df2.set_index('NAME', inplace=True) df2 df2.plot(kind='bar') ###Output _____no_output_____ ###Markdown Routing Emergency Supplies to Relief Camps A centralized location has been established at Nehru Stadium to organise the relief materials collected from various organizations and volunteers. From there, the relief material is distributed to the needy flood affected people.The GIS provided routing tools that can help plan routes of the relief trucks from the center to relief camps: ###Code routemap = gis.map("Chennai") routemap nehru_stadium = geocode('Jawaharlal Nehru Stadium, Chennai')[0] routemap.draw(nehru_stadium, {"title": "Nehru Stadium", "content": "Chennai Flood Relief Center"}) start_time = datetime.datetime(2015, 12, 13, 9, 0) routes = arcgis.features.use_proximity.plan_routes( relief_centers, 15, 15, start_time, nehru_stadium, stop_service_time=30) routemap.add_layer(routes['routes_layer']) routemap.add_layer(routes['assigned_stops_layer']) ###Output _____no_output_____ ###Markdown Chennai Floods 2015–A Geographic AnalysisOn December 1–2, 2015, the Indian city of Chennai received more rainfall in 24 hours than it had seen on any day since 1901. The deluge followed a month of persistent monsoon rains that were already well above normal for the Indian state of Tamil Nadu. At least 250 people had died, several hundred had been critically injured, and thousands had been affected or displaced by the flooding that has ensued. Table of ContentsChennai Floods 2015–A Geographic AnalysisSummary of this sampleChennai Floods ExplainedHow much rain and where?Spatial AnalysisWhat caused the flooding in Chennai?A wrong call that sank ChennaiFlood Relief CampsRouting Emergency Supplies to Relief Camps The image above provides satellite-based estimates of rainfall over southeastern India on December 1–2, accumulating in 30–minute intervals. The rainfall data is acquired from the Integrated Multi-Satellite Retrievals for GPM (IMERG), a product of the [Global Precipitation Measurement](http://www.nasa.gov/mission_pages/GPM/main/index.html) mission. The brightest shades on the maps represent rainfall totals approaching 400 millimeters (16 inches) during the 48-hour period. These regional, remotely-sensed estimates may differ from the totals measured by ground-based weather stations. According to Hal Pierce, a scientist on the GPM team at NASA’s Goddard Space Flight Center, the highest rainfall totals exceeded 500 mm (20 inches) in an area just off the southeastern coast.[Source: NASA http://earthobservatory.nasa.gov/IOTD/view.php?id=87131] Summary of this sampleThis sample showcases not just the analysis and visualization capabilities of your GIS, but also the ability to store illustrative text, graphics and live code in a Jupyter notebook.The sample starts off reporting the devastating effects of the flood. 
We plot the locations of rainfall guages and **interpolate** the data to create a continuous surface representing the amount of rainfall throughout the state.Next we plot the locations of major lakes and **trace downstream** the path floods waters would take. We create a **buffer** around this path to demark at risk areas.In the second part of the sample, we take a look at **time series** satellite imagery and observe the human impacts on natural reservoirs over a period of two decades.We then vizualize the locations of relief camps and analyze their capacity using **pandas** and **matplotlib**. We **aggregate** the camps district wise to understand which ones have the largest number of refugees.In the last part, we perform a **routing** analysis to figure out the best path to route emergency supplies from storage to the relief campsFirst, let's import all the necessary libraries and connect to our GIS via an existing profile or creating a new connection by e.g. `gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")`. ###Code import datetime %matplotlib inline import matplotlib.pyplot as pd from IPython.display import display, YouTubeVideo import arcgis from arcgis.gis import GIS from arcgis.features.analyze_patterns import interpolate_points from arcgis.geocoding import geocode from arcgis.features.find_locations import trace_downstream from arcgis.features.use_proximity import create_buffers gis = GIS('home') ###Output _____no_output_____ ###Markdown Chennai Floods Explained ###Code YouTubeVideo('x4dNIfx6HVs') ###Output _____no_output_____ ###Markdown The catastrophic flooding in Chennai is the result of the heaviest rain in several decades, which forced authorities to release a massive 30,000 cusecs from the Chembarambakkam reservoir into the Adyar river over two days, causing it to flood its banks and submerge neighbourhoods on both sides. It did not help that the Adyar’s stream is not very deep or wide, and its banks have been heavily encroached upon over the years.Similar flooding triggers were in action at Poondi and Puzhal reservoirs, and the Cooum river that winds its way through the city.While Chief Minister J Jayalalithaa said, during the earlier phase of heavy rain last month, that damage during the monsoon was “inevitable”, the fact remains that the mindless development of Chennai over the last two decades — the filling up of lowlands and choking of stormwater drains and other exits for water — has played a major part in the escalation of the crisis.[Source: Indian Express http://indianexpress.com/article/explained/why-is-chennai-under-water/sthash.LlhnqM4B.dpuf] How much rain and where? To get started with our analysis, we bring in a map of the affected region. The map is a live widget that is internally using the ArcGIS JavaScript API. ###Code map = gis.map("Chennai") map ###Output _____no_output_____ ###Markdown We can search for content in our GIS and add layers to our map that can be used for visualization or analysis: ###Code chennaipop = gis.content.search("Chennai_Population", item_type="Feature Layer", outside_org=True)[0] chennaipop ###Output _____no_output_____ ###Markdown Assign an optional JSON paramter to specify its opacity, e.g. `map.add_layer(chennaipop, {"opacity":0.7})` or else just add the layer with no transparency. 
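Since the next cell keys the renderer to a field of the layer, it can help to first list the fields the service actually publishes. A small sketch: it assumes the item exposes at least one feature layer, and the exact way field properties are read can vary slightly between API versions.
###Code
# List the fields published with the Chennai population layer
pop_layer = chennaipop.layers[0]
for field in pop_layer.properties.fields:
    print(field.name, '-', field.type)
###Output
_____no_output_____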
###Code map.add_layer(chennaipop, {"renderer":"ClassedColorRenderer", "field_name": "TOTPOP_CY", "opacity":0.7}) ###Output _____no_output_____ ###Markdown To get a sense of how much it rained and where, let's use rainfall data for December 2nd 2015, obtained from the Regional Meteorological Center in Chennai. Tabular data is hard to visualize, so let's bring in a map from our GIS to visualize the data: ###Code search_rainfall = gis.content.search("Chennai_precipitation", item_type="Feature Layer", outside_org=True) if len(search_rainfall) >= 1: rainfall = search_rainfall[0] else: # if the "Chennai_precipitation" web layer does not exist print("Web Layer does not exist. Re-publishing...") # import any pandas data frame, with an address field, as a layer in our GIS import pandas as pds df = pds.read_csv('data/Chennai_precipitation.csv') # Create an arcgis.features.FeatureCollection object by importing the pandas dataframe with an address field rainfall = gis.content.import_data(df, {"Address" : "LOCATION"}) map2 = gis.map("Tamil Nadu, India") map2 ###Output _____no_output_____ ###Markdown We then add this layer to our map to see the locations of the weather stations from which the rainfall data was collected: ###Code map2.add_layer(rainfall, {"renderer":"ClassedSizeRenderer", "field_name":"RAINFALL" }) ###Output _____no_output_____ ###Markdown Here we used the **smart mapping** capability of the GIS to automatically render the data with proportional symbols. Spatial AnalysisRainfall is a continuous phenonmenon that affects the whole region, not just the locations of the weather stations. Based on the observed rainfall at the monitoring stations and their locations, we can interpolate and deduce the approximate rainfall across the whole region. We use the **Interpolate Points** tool from the GIS's spatial analysis service for this.The Interpolate Points tool uses empirical Bayesian kriging to perform the interpolation. ###Code interpolated_rf = interpolate_points(rainfall, field='RAINFALL') ###Output _____no_output_____ ###Markdown Let us create another map of Tamil Nadu state and render the output from Interpolate Points tool ###Code intmap = gis.map("Tamil Nadu") intmap intmap.add_layer(interpolated_rf['result_layer']) ###Output _____no_output_____ ###Markdown We see that rainfall was most severe in and around Chennai as well some parts of central Tamil Nadu. What caused the flooding in Chennai? A wrong call that sank ChennaiMuch of the flooding and subsequent waterlogging was a consequence of the outflows from major reservoirs into swollen rivers and into the city following heavy rains. The release of waters from the Chembarambakkam reservoir in particular has received much attention. [Source: The Hindu, http://www.thehindu.com/news/cities/chennai/chennai-floods-a-wrong-call-that-sank-the-city/article7967371.ece] ###Code lakemap = gis.map("Chennai") lakemap.height='450px' lakemap ###Output _____no_output_____ ###Markdown Let's have look at the major lakes and water reservoirs that were filled to the brim in Chennai due the rains. We plot the locations of some of the reservoirs that had a large outflow during the rains:To plot the locations, we use geocoding tools from the `tools` module. 
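Each call to `geocode()` returns a list of candidate matches rather than a single point, so it is worth glancing at the best candidate before plotting it. A short sketch; the `address`, `score` and `location` keys reflect what the default World Geocoding Service typically returns and should be treated as an assumption here.
###Code
# Inspect the best geocoding candidate for one of the reservoirs
candidates = geocode("Chembarambakkam, Tamil Nadu")
best = candidates[0]
print(best['address'])     # matched place name
print(best['score'])       # match confidence (0-100)
print(best['location'])    # point geometry as an x/y dictionary
###Output
_____no_output_____
###Markdown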
Your GIS can have more than 1 geocoding service, for simplicity, the sample below chooses the first available geocoder to perform an address search ###Code lakemap.draw(geocode("Chembarambakkam, Tamil Nadu")[0], {"title": "Chembarambakkam", "content": "Water reservoir"}) lakemap.draw(geocode("Puzhal Lake, Tamil Nadu")[0], {"title": "Puzhal", "content": "Water reservoir"}) lakemap.draw(geocode("Kannampettai, Tamil Nadu")[0], {"title": "Poondi Lake ", "content": "Water reservoir"}) ###Output _____no_output_____ ###Markdown To identify the flood prone areas, let's trace the path that the water would take when released from the lakes. To do this, we first bring in a layer of lakes in Chennai: ###Code search_results = gis.content.search("Chennai_lakes", item_type="Feature Layer", outside_org=True) search_results chennai_lakes = search_results[2] chennai_lakes ###Output _____no_output_____ ###Markdown Now, let's call the **`Trace Downstream`** analysis tool from the GIS: ###Code downstream = trace_downstream(chennai_lakes) downstream.query() ###Output _____no_output_____ ###Markdown The areas surrounding the trace paths are most prone to flooding and waterlogging. To identify the areas that were at risk, we buffer the traced flow paths by one mile in each direction and visualize it on the map. We see that large areas of the city of Chennai were susceptible to flooding and waterlogging. ###Code floodprone_buffer = create_buffers(downstream, [ 1 ], units='Miles') lakemap.add_layer(floodprone_buffer) ###Output _____no_output_____ ###Markdown Nature's fury or human made disaster?"It is easy to attribute the devastation from unexpected flooding to the results of nature and climate change when in fact it is a result of poor planning and infrastructure. In Chennai, as in several cities across the country, we are experiencing the wanton destruction of our natural buffer zones—rivers, creeks, estuaries, marshlands, lakes—in the name of urban renewal and environmental conservation.The recent floods in Chennai are a fallout of real estate riding roughshod over the city’s waterbodies. Facilitated by an administration that tweaked and modified building rules and urban plans, the real estate boom has consumed the city’s lakes, ponds, tanks and large marshlands.The Ennore creek that used to be home to sprawling mangroves is fast disappearing with soil dredged from the sea being dumped there. The Kodungaiyur dump site in the Madhavaram–Manali wetlands is one of two municipal landfills that service the city. Velachery and Pallikaranai marshlands are a part of the Kovalam basin that was the southern-most of the four river basins for the city. Today, the slightest rains cause flooding and water stagnation in Velachery, home to the city’s largest mall, several other commercial and residential buildings, and also the site where low income communities were allocated land.The Pallikaranai marshlands, once a site for beautiful migratory birds, are now home to the second of the two landfills in the city where the garbage is rapidly leeching into the water and killing the delicate ecosystem."[Source: Chennai's Rain Check https://www.epw.in/journal/2015/49/commentary/chennais-rain-check.html]There are several marshlands and mangroves in the Chennai region that act as natural buffer zones to collect rain water. Let's see the human impact on Pallikaranai marshland over the last decade by comparing satellite images. 
###Code def exact_search(my_gis, title, owner_value, item_type_value, max_items_value=20): final_match = None search_result = my_gis.content.search(query= title + ' AND owner:' + owner_value, item_type=item_type_value, max_items=max_items_value, outside_org=True) if "Imagery Layer" in item_type_value: item_type_value = item_type_value.replace("Imagery Layer", "Image Service") elif "Layer" in item_type_value: item_type_value = item_type_value.replace("Layer", "Service") for result in search_result: if result.title == title: final_match = result break return final_match ls_water = exact_search(gis, 'Landsat GLS Multispectral', 'esri', 'Imagery Layer') ls_water ###Output _____no_output_____ ###Markdown Lets us see how the Pallikaranai marshland has changed over the past few decades, and how this has also contributed to the flooding. We create two maps and load the Land / Water Boundary layer to visualize this. This image layer is time enabled, and the map widget gives you the ability to navigate this dataset via time as well. ###Code ls_water_lyr = ls_water.layers[0] from arcgis.geocoding import geocode area = geocode("Tamil Nadu, India", out_sr=ls_water_lyr.properties.extent.spatialReference)[0] ls_water_lyr.extent = area['extent'] ###Output _____no_output_____ ###Markdown In the cell below, we will use a band combination [5,4,3] (a.k.a. mid-IR (Band 5), near-IR (Band 4) and red (Band 3)) of Landsat to provide definition of land-water boundaries and highlights subtle details not readily apparent in the visible bands alone. The reason that we use more infrared bands is to locate inland lakes and streams with greater precision. Generally, the wetter the soil, the darker it appears, because of the infrared absorption capabilities of water. ###Code # data source option from arcgis.raster.functions import stretch, extract_band target_img_layer = stretch(extract_band(ls_water_lyr, [5,4,3]), stretch_type="percentclip", gamma=[1,1,1], dra=True) ###Output _____no_output_____ ###Markdown Use the cell below to filter imageries based on the temporal conditions, and export the filtered results as local images, then show comparatively with other time range. You can either use the where clause e.g. `where="(Year = " + str(start_year) + ")",` or use the temporal filter as shown below. ###Code import pandas as pd from arcgis import geometry import datetime as dt def filter_images(my_map, start_year, end_year): selected = target_img_layer.filter_by(where="(Category = 1) AND (CloudCover <=0.2)", time=[dt.datetime(start_year, 1, 1), dt.datetime(end_year, 1, 1)], geometry=arcgis.geometry.filters.intersects(ls_water_lyr.extent)) my_map.add_layer(selected) fs = selected.query(out_fields="AcquisitionDate, GroupName, Month, DayOfYear, WRS_Row, WRS_Path") tdf = fs.sdf return tdf ###Output _____no_output_____ ###Markdown First, search for qualified satellite imageries (tiles) intersecting with the area of interest at year 1991. ###Code satmap1 = gis.map("Pallikaranai, Tamil Nadu, India", 13) df = filter_images(satmap1, 1991, 1992) df.head() ###Output _____no_output_____ ###Markdown Then search for satellite imageries intersecting with the area of interest at 2009. 
###Code satmap2 = gis.map("Pallikaranai, Tamil Nadu, India", 13) df = filter_images(satmap2, 2009, 2010) df.head() from ipywidgets import * satmap1.layout=Layout(flex='1 1', padding='10px', height='300px') satmap2.layout=Layout(flex='1 1', padding='10px', height='300px') box = HBox([satmap1, satmap2]) box ###Output _____no_output_____ ###Markdown The human impact on the marshland is all too apparent in the satellite images. The marshland has shrunk to less than a third of its size in just two decades."Not long ago, it was a 50-square-kilometre water sprawl in the southern suburbs of Chennai. Now, it is 4.3 square kilometres – less than a tenth of its original. The growing finger of a garbage dump sticks out like a cancerous tumour in the northern part of the marshland. Two major roads cut through the waterbody with few pitifully small culverts that are not up to the job of transferring the rain water flows from such a large catchment. The edges have been eaten into by institutes like the National Institute of Ocean Technology. Ironically, NIOT is an accredited consultant to prepare Environmental Impact Assessments on various subjects, including on the implications of constructing on waterbodies.Other portions of this wetland have been sacrificed to accommodate the IT corridor. But water offers no exemption to elite industry. Unmindful of the lofty intellectuals at work in the glass and steel buildings of the software parks, rainwater goes by habit to occupy its old haunts, bringing the back-office work of American banks to a grinding halt."[Source: http://scroll.in/article/769928/chennai-floods-are-not-a-natural-disaster-theyve-been-created-by-unrestrained-construction] Flood Relief CampsTo provide emergency assistance, the Tamil Nadu government has set up several flood relief camps in the flood affected areas. They provide food, shelter and the basic necessities to thousands of people displaced by the floods. The locations of the flood relief camps was obtained from http://cleanchennai.com/floodrelief/2015/12/09/relief-centers-as-on-8-dec-2015/ and published to the GIS as a layer, that is visualized below: ###Code relief_centers = gis.content.search("Chennai Relief Centers")[0] reliefmap = gis.map("Chennai") reliefmap ###Output _____no_output_____ ###Markdown Assign an optional JSON paramter to specify its opacity, e.g. `reliefmap.add_layer(chennaipop, {"opacity":0.5})` or else just add the layer with no transparency. ###Code reliefmap.add_layer(chennaipop, {"opacity":0.5}) reliefmap.add_layer(relief_centers) ###Output _____no_output_____ ###Markdown Let us read the relief center layer as a pandas dataframe to analyze the data further ###Code relief_data = relief_centers.layers[0].query().sdf relief_data.head() relief_data['No_of_pers'].sum() relief_data['No_of_pers'].describe() relief_data['No_of_pers'].hist() ###Output _____no_output_____ ###Markdown In our dataset, each row represents a relief camp location. To quickly get the dimensions (rows & columns) of our data frame, we use the `shape` property ###Code relief_data.shape ###Output _____no_output_____ ###Markdown As of 8th December, 2015, there were 31,478 people in the 136 relief camps. Let's aggregate them by the district the camp is located in. To accomplish this, we use the `aggregate_points` tool. 
###Code chennai_pop_featurelayer = chennaipop.layers[0] res = arcgis.features.summarize_data.aggregate_points( relief_centers, chennai_pop_featurelayer, False, ["No_of_pers Sum"]) aggr_lyr = res['aggregated_layer'] reliefmap.add_layer(aggr_lyr, { "renderer": "ClassedSizeRenderer", "field_name":"SUM_No_of_pers"}) df = aggr_lyr.query().sdf df.head() ###Output _____no_output_____ ###Markdown Let us represent the aggreate result as a table: ###Code df = aggr_lyr.query().sdf df2 = df[['NAME', 'SUM_No_of_pers']] df2.set_index('NAME', inplace=True) df2 df2.plot(kind='bar') ###Output _____no_output_____ ###Markdown Routing Emergency Supplies to Relief Camps A centralized location has been established at Nehru Stadium to organise the relief materials collected from various organizations and volunteers. From there, the relief material is distributed to the needy flood affected people.The GIS provided routing tools that can help plan routes of the relief trucks from the center to relief camps: ###Code routemap = gis.map("Chennai") routemap nehru_stadium = geocode('Jawaharlal Nehru Stadium, Chennai')[0] routemap.draw(nehru_stadium, {"title": "Nehru Stadium", "content": "Chennai Flood Relief Center"}) start_time = datetime.datetime(2015, 12, 13, 9, 0) routes = arcgis.features.use_proximity.plan_routes( relief_centers, 15, 15, start_time, nehru_stadium, stop_service_time=30) routemap.add_layer(routes['routes_layer']) routemap.add_layer(routes['assigned_stops_layer']) ###Output _____no_output_____ ###Markdown Chennai Floods 2015 - a geographic analysisOn December 1–2, 2015, the Indian city of Chennai received more rainfall in 24 hours than it had seen on any day since 1901. The deluge followed a month of persistent monsoon rains that were already well above normal for the Indian state of Tamil Nadu. At least 250 people had died, several hundred had been critically injured, and thousands had been affected or displaced by the flooding that has ensued. The animation above provides satellite-based estimates of rainfall over southeastern India on December 1–2, accumulating in 30–minute intervals. The rainfall data is acquired from the Integrated Multi-Satellite Retrievals for GPM (IMERG), a product of the [Global Precipitation Measurement](http://www.nasa.gov/mission_pages/GPM/main/index.html) mission. The brightest shades on the maps represent rainfall totals approaching 400 millimeters (16 inches) during the 48-hour period. These regional, remotely-sensed estimates may differ from the totals measured by ground-based weather stations. According to Hal Pierce, a scientist on the GPM team at NASA’s Goddard Space Flight Center, the highest rainfall totals exceeded 500 mm (20 inches) in an area just off the southeastern coast.[Source: NASA http://earthobservatory.nasa.gov/IOTD/view.php?id=87131] Summary of this sampleThis sample showcases not just the analysis and visualization capabilities of your GIS but also the ability to store illustrative text, graphics and live code in a Jupyter notebook.The sample starts off reporting the devastating effects of the flood. We plot the locations of rainfall guages and **interpolate** the data to create a continuous surface representing the amount of rainfall throughout the state.Next we plot the locations of major lakes and **trace downstream** the path floods waters would take. 
We create a **buffer** around this path to demark at risk areas.In the second part of the sample, we take a look at **time series** satellite imagery and observe the human impacts on natural reservoirs over a period of two decades.We then vizualize the locations of relief camps and analyze their capacity using **pandas** and **matplotlib**. We **aggregate** the camps district wise to understand which ones have the largest number of refugees.In the last part, we perform a **routing** analysis to figure out the best path to route emergency supplies from storage to the relif camps Chennai Floods Explained ###Code from IPython.display import YouTubeVideo YouTubeVideo('x4dNIfx6HVs') ###Output _____no_output_____ ###Markdown The catastrophic flooding in Chennai is the result of the heaviest rain in several decades, which forced authorities to release a massive 30,000 cusecs from the Chembarambakkam reservoir into the Adyar river over two days, causing it to flood its banks and submerge neighbourhoods on both sides. It did not help that the Adyar’s stream is not very deep or wide, and its banks have been heavily encroached upon over the years.Similar flooding triggers were in action at Poondi and Puzhal reservoirs, and the Cooum river that winds its way through the city.While Chief Minister J Jayalalithaa said, during the earlier phase of heavy rain last month, that damage during the monsoon was “inevitable”, the fact remains that the mindless development of Chennai over the last two decades — the filling up of lowlands and choking of stormwater drains and other exits for water — has played a major part in the escalation of the crisis.[Source: Indian Express http://indianexpress.com/article/explained/why-is-chennai-under-water/sthash.LlhnqM4B.dpuf] How much rain and where? To get started with our analysis, we connect to our GIS and bring in a map of the affected region. The map is a live widget that is internally using the ArcGIS JavaScript API that powers [ArcGIS.com](http://www.arcgis.com). ###Code import arcgis from arcgis.gis import GIS from IPython.display import display gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123") map = gis.map("Chennai", zoomlevel = 8) map ###Output _____no_output_____ ###Markdown We can search for content in our GIS and add layers to our map that can be used for visualization or analysis: ###Code chennaipop = gis.content.search("Chennai_Population", item_type="feature service", outside_org=True)[0] chennaipop map.add_layer(chennaipop) ###Output _____no_output_____ ###Markdown To get a sense of how much it rained and where, let's use rainfall data for December 2nd 2015, obtained from the Regional Meteorological Center in Chennai. The data is in chennai-rainfall.csv file, that we load into a Pandas data frame, and list its contents: ###Code import pandas as pd df = pd.read_csv('data/chennai-rainfall.csv') df.head() ###Output _____no_output_____ ###Markdown Tabular data is hard to visualize, so let's bring in a map from our GIS to visualize the data: ###Code map = gis.map("Tamil Nadu", zoomlevel=7) map ###Output _____no_output_____ ###Markdown We can import any pandas data frame, with an address field, as a layer in our GIS. 
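Before importing it, a quick check that the table is complete saves a round trip. A small sketch against the `df` loaded above; it assumes the `LOCATION` and `RAINFALL` columns listed by `df.head()`.
###Code
# Basic sanity checks before publishing the table to the GIS
print(df.shape)                    # stations x columns
print(df['RAINFALL'].describe())   # spread of the recorded rainfall values
print(df.isna().sum())             # any stations missing a location or a reading?
###Output
_____no_output_____
###Markdown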
We then add this layer to our map to see the locations of the weather stations from which the rainfall data was collected: ###Code # Create an arcgis.features.FeatureCollection object by importing the pandas dataframe with an address field rainfall = gis.content.import_data(df, {"Address" : "LOCATION"}) # The FeatureCollection can be added to the map using add_layer() method, just like regular portal items map.add_layer(rainfall, { "renderer":"ClassedSizeRenderer", "field_name":"RAINFALL" }) ###Output _____no_output_____ ###Markdown Here we used the **smart mapping** capability of the GIS to automatically render the data with proportional symbols. To learn more about smart mapping, visit the sample titled 'Smart Mapping' under the section '05 Power Users & Developers'. Spatial AnalysisRainfall is a continuous phenonmenon that affects the whole region, not just the locations of the weather stations. Based on the observed rainfall at the monitoring stations and their locations, we can interpolate and deduce the approximate rainfall across the whole region. We use the **Interpolate Points** tool from the GIS's spatial analysis service for this.The Interpolate Points tool uses empirical Bayesian kriging to perform the interpolation. ###Code from arcgis.features.analyze_patterns import interpolate_points interpolated_rf = interpolate_points(rainfall, field='RAINFALL') ###Output _____no_output_____ ###Markdown Let us create another map of Tamil Nadu state and render the output from Interpolate Points tool ###Code intmap = gis.map("Tamil Nadu", zoomlevel=7) intmap intmap.add_layer(interpolated_rf['result_layer']) ###Output _____no_output_____ ###Markdown We see that rainfall was most severe in and around Chennai as well some parts of central Tamil Nadu. What caused the flooding in Chennai? A wrong call that sank ChennaiMuch of the flooding and subsequent waterlogging was a consequence of the outflows from major reservoirs into swollen rivers and into the city following heavy rains. The release of waters from the Chembarambakkam reservoir in particular has received much attention. [Source: The Hindu, http://www.thehindu.com/news/cities/chennai/chennai-floods-a-wrong-call-that-sank-the-city/article7967371.ece] ###Code lakemap = gis.map("Chennai", zoomlevel=11) lakemap.height='450px' lakemap ###Output _____no_output_____ ###Markdown Let's have look at the major lakes and water reservoirs that were filled to the brim in Chennai due the rains. We plot the locations of some of the reservoirs that had a large outflow during the rains:To plot the locations, we use geocoding tools from the `tools` module. Your GIS can have more than 1 geocoding service, for simplicity, the sample below chooses the first available geocoder to perform an address search ###Code from arcgis.geocoding import geocode lakemap.draw(geocode("Chembarambakkam, Tamil Nadu")[0], {"title": "Chembarambakkam", "content": "Water reservoir"}) lakemap.draw(geocode("Puzhal Lake, Tamil Nadu")[0], {"title": "Puzhal", "content": "Water reservoir"}) lakemap.draw(geocode("Kannampettai, Tamil Nadu")[0], {"title": "Poondi Lake ", "content": "Water reservoir"}) ###Output _____no_output_____ ###Markdown To identify the flood prone areas, let's trace the path that the water would take when released from the lakes. 
To do this, we first bring in a layer of lakes in Chennai, and call the **`Trace Downstream`** analysis tool from the GIS: ###Code chennai_lakes = gis.content.search("Chennai Lakes", "feature collection", outside_org=True)[0] chennai_lakes ###Output _____no_output_____ ###Markdown The areas surrounding the trace paths are most prone to flooding and waterlogging. To identify the areas that were at risk, we buffer the traced flow paths by one mile in each direction and visualize it on the map. We see that large areas of the city of Chennai were susceptible to flooding and waterlogging. ###Code from arcgis.features.find_locations import trace_downstream from arcgis.features.use_proximity import create_buffers floodprone_buffer = create_buffers(trace_downstream(chennai_lakes), [ 1 ], units='Miles') lakemap.add_layer(floodprone_buffer) ###Output _____no_output_____ ###Markdown Nature's fury or human made disaster?"It is easy to attribute the devastation from unexpected flooding to the results of nature and climate change when in fact it is a result of poor planning and infrastructure. In Chennai, as in several cities across the country, we are experiencing the wanton destruction of our natural buffer zones—rivers, creeks, estuaries, marshlands, lakes—in the name of urban renewal and environmental conservation.The recent floods in Chennai are a fallout of real estate riding roughshod over the city’s waterbodies. Facilitated by an administration that tweaked and modified building rules and urban plans, the real estate boom has consumed the city’s lakes, ponds, tanks and large marshlands.The Ennore creek that used to be home to sprawling mangroves is fast disappearing with soil dredged from the sea being dumped there. The Kodungaiyur dump site in the Madhavaram–Manali wetlands is one of two municipal landfills that service the city. Velachery and Pallikaranai marshlands are a part of the Kovalam basin that was the southern-most of the four river basins for the city. Today, the slightest rains cause flooding and water stagnation in Velachery, home to the city’s largest mall, several other commercial and residential buildings, and also the site where low income communities were allocated land.The Pallikaranai marshlands, once a site for beautiful migratory birds, are now home to the second of the two landfills in the city where the garbage is rapidly leeching into the water and killing the delicate ecosystem."[Source: Chennai's Rain Check http://www.epw.in/commentary/chennais-rain-check.html]There are several marshlands and mangroves in the Chennai region that act as natural buffer zones to collect rain water. Let's see the human impact on Pallikaranai marshland over the last decade by comparing satellite images. ###Code ls_water = gis.content.search("Land Water Boundary (453) 1990-2010", max_items=1, outside_org = True)[0] ls_water ###Output _____no_output_____ ###Markdown Lets us see how the Pallikaranai marshland has changed over the past few decades, and how this has also contributed to the flooding. We create two maps and load the Land / Water Boundary layer to visualize this. This image layer is time enabled, and the map widget gives you the ability to navigate this dataset via time as well. 
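You can confirm that time support by peeking at the layer's service properties. This is a sketch only; the `timeInfo` and `timeExtent` property names follow the standard image service JSON and are an assumption for this particular item.
###Code
# Confirm that the land/water layer is time enabled by reading its service properties
import datetime as dt
lw_layer = ls_water.layers[0]
extent_ms = lw_layer.properties.timeInfo.timeExtent   # [start, end] in milliseconds since epoch
print([dt.datetime.utcfromtimestamp(t / 1000).year for t in extent_ms])
###Output
_____no_output_____
###Markdown
The two maps below are then pinned to 1989-1990 and 2009-2010 so the marshland can be compared across two decades: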
###Code satmap1 = gis.map("Pallikaranai, Tamil Nadu, India", zoomlevel=13) satmap1.add_layer(ls_water) satmap1.set_time_extent('1/1/1989 UTC', '1/1/1990 UTC') satmap2 = gis.map("Pallikaranai, Tamil Nadu, India", zoomlevel=13) satmap2.add_layer(ls_water) satmap2.set_time_extent('1/1/2009 UTC', '1/1/2010 UTC') from ipywidgets import * satmap1.layout=Layout(flex='1 1', padding='10px') satmap2.layout=Layout(flex='1 1', padding='10px') box = HBox([satmap1, satmap2]) box ###Output _____no_output_____ ###Markdown The human impact on the marshland is all too apparent in the satellite images. The marshland has shrunk to less than a third of its size in just two decades."Not long ago, it was a 50-square-kilometre water sprawl in the southern suburbs of Chennai. Now, it is 4.3 square kilometres – less than a tenth of its original. The growing finger of a garbage dump sticks out like a cancerous tumour in the northern part of the marshland. Two major roads cut through the waterbody with few pitifully small culverts that are not up to the job of transferring the rain water flows from such a large catchment. The edges have been eaten into by institutes like the National Institute of Ocean Technology. Ironically, NIOT is an accredited consultant to prepare Environmental Impact Assessments on various subjects, including on the implications of constructing on waterbodies.Other portions of this wetland have been sacrificed to accommodate the IT corridor. But water offers no exemption to elite industry. Unmindful of the lofty intellectuals at work in the glass and steel buildings of the software parks, rainwater goes by habit to occupy its old haunts, bringing the back-office work of American banks to a grinding halt."[Source: http://scroll.in/article/769928/chennai-floods-are-not-a-natural-disaster-theyve-been-created-by-unrestrained-construction] Flood Relief CampsTo provide emergency assistance, the Tamil Nadu government has set up several flood relief camps in the flood affected areas. They provide food, shelter and the basic necessities to thousands of people displaced by the floods. The locations of the flood relief camps was obtained from http://cleanchennai.com/floodrelief/2015/12/09/relief-centers-as-on-8-dec-2015/ and published to the GIS as a layer, that is visualized below: ###Code relief_centers = gis.content.search("Chennai Relief Centers", item_type="Feature Collection", outside_org=True)[0] reliefmap = gis.map("Chennai", zoomlevel=10) reliefmap reliefmap.add_layer(chennaipop) reliefmap.add_layer(relief_centers) ###Output _____no_output_____ ###Markdown Let us read the relief center layer as a pandas dataframe to analyze the data further ###Code relief_data = relief_centers.layers[0].query().df relief_data.head() relief_data['No_of_persons'].sum() relief_data['No_of_persons'].describe() %matplotlib inline import matplotlib.pyplot as pd relief_data['No_of_persons'].hist() ###Output _____no_output_____ ###Markdown In our dataset, each row represents a relief camp location. To quickly get the dimensions (rows & columns) of our data frame, we use the `shape` property ###Code relief_data.shape ###Output _____no_output_____ ###Markdown As of 8th December, 2015, there were 31,478 people in the 136 relief camps. Let's aggregate them by the district the camp is located in. To accomplish this, we use the `aggregate_points` tool. 
###Code chennai_pop_featurelayer = chennaipop.layers[0] res = arcgis.features.summarize_data.aggregate_points(relief_centers, chennai_pop_featurelayer, False, ["No_of_persons Sum"]) aggr_lyr = res['aggregated_layer'] reliefmap.add_layer(aggr_lyr, { "renderer": "ClassedSizeRenderer", "field_name":"SUM_No_of_persons"}) ###Output _____no_output_____ ###Markdown Let us represent the aggreate result as a table: ###Code df = aggr_lyr.query().df df2 = df[['NAME', 'SUM_No_of_persons']] df2.set_index('NAME', inplace=True) df2 df2.plot(kind='bar') ###Output _____no_output_____ ###Markdown Routing Emergency Supplies to Relief Camps A centralized location has been established at Nehru Stadium to organise the relief materials collected from various organizations and volunteers. From there, the relief material is distributed to the needy flood affected people.The GIS provided routing tools that can help plan routes of the relief trucks from the center to relief camps: ###Code routemap = gis.map("Chennai", zoomlevel = 12) routemap nehru_stadium = geocode('Jawaharlal Nehru Stadium, Chennai')[0] routemap.draw(nehru_stadium, {"title": "Nehru Stadium", "content": "Chennai Flood Relief Center"}) import datetime start_time = datetime.datetime(2015, 12, 13, 9, 0) routes = arcgis.features.use_proximity.plan_routes(relief_centers, 15, 15, start_time, nehru_stadium, stop_service_time=30) routemap.add_layer(routes['routes_layer']) routemap.add_layer(routes['assigned_stops_layer']) ###Output _____no_output_____
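The tool returns a dictionary of result layers, so the routes layer can be pulled into a dataframe to review what the solver proposed. A sketch only; the available output fields depend on the routing service, and older releases of the API expose `.df` where newer ones expose `.sdf`.
###Code
# Pull the planned truck routes into a dataframe to review the solver's output
routes_fs = routes['routes_layer'].query()
routes_df = routes_fs.sdf        # on older versions of the API this is .df
print(routes_df.shape)
routes_df.head()
###Output
_____no_output_____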
###Markdown Chennai Floods 2015 - a geographic analysisOn December 1–2, 2015, the Indian city of Chennai received more rainfall in 24 hours than it had seen on any day since 1901. The deluge followed a month of persistent monsoon rains that were already well above normal for the Indian state of Tamil Nadu. At least 250 people had died, several hundred had been critically injured, and thousands had been affected or displaced by the flooding that has ensued. The animation above provides satellite-based estimates of rainfall over southeastern India on December 1–2, accumulating in 30–minute intervals. The rainfall data is acquired from the Integrated Multi-Satellite Retrievals for GPM (IMERG), a product of the [Global Precipitation Measurement](http://www.nasa.gov/mission_pages/GPM/main/index.html) mission. The brightest shades on the maps represent rainfall totals approaching 400 millimeters (16 inches) during the 48-hour period. These regional, remotely-sensed estimates may differ from the totals measured by ground-based weather stations. According to Hal Pierce, a scientist on the GPM team at NASA’s Goddard Space Flight Center, the highest rainfall totals exceeded 500 mm (20 inches) in an area just off the southeastern coast.[Source: NASA http://earthobservatory.nasa.gov/IOTD/view.php?id=87131] Summary of this sampleThis sample showcases not just the analysis and visualization capabilities of your GIS but also the ability to store illustrative text, graphics and live code in a Jupyter notebook.The sample starts off reporting the devastating effects of the flood. We plot the locations of rainfall guages and **interpolate** the data to create a continuous surface representing the amount of rainfall throughout the state.Next we plot the locations of major lakes and **trace downstream** the path floods waters would take.
We create a **buffer** around this path to demarcate at-risk areas.In the second part of the sample, we take a look at **time series** satellite imagery and observe the human impacts on natural reservoirs over a period of two decades.We then visualize the locations of relief camps and analyze their capacity using **pandas** and **matplotlib**. We **aggregate** the camps district-wise to understand which ones have the largest number of refugees.In the last part, we perform a **routing** analysis to figure out the best path to route emergency supplies from storage to the relief camps. Chennai Floods Explained ###Code from IPython.display import YouTubeVideo YouTubeVideo('x4dNIfx6HVs') ###Output _____no_output_____ ###Markdown The catastrophic flooding in Chennai is the result of the heaviest rain in several decades, which forced authorities to release a massive 30,000 cusecs from the Chembarambakkam reservoir into the Adyar river over two days, causing it to flood its banks and submerge neighbourhoods on both sides. It did not help that the Adyar’s stream is not very deep or wide, and its banks have been heavily encroached upon over the years.Similar flooding triggers were in action at Poondi and Puzhal reservoirs, and the Cooum river that winds its way through the city.While Chief Minister J Jayalalithaa said, during the earlier phase of heavy rain last month, that damage during the monsoon was “inevitable”, the fact remains that the mindless development of Chennai over the last two decades — the filling up of lowlands and choking of stormwater drains and other exits for water — has played a major part in the escalation of the crisis.[Source: Indian Express http://indianexpress.com/article/explained/why-is-chennai-under-water/sthash.LlhnqM4B.dpuf] How much rain and where? To get started with our analysis, we connect to our GIS and bring in a map of the affected region. The map is a live widget that internally uses the ArcGIS JavaScript API that powers [ArcGIS.com](http://www.arcgis.com). ###Code import arcgis from arcgis.gis import GIS from IPython.display import display gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123") map = gis.map("Chennai", zoomlevel = 8) map ###Output _____no_output_____ ###Markdown We can search for content in our GIS and add layers to our map that can be used for visualization or analysis: ###Code chennaipop = gis.content.search("Chennai_Population", item_type="feature service", outside_org=True)[0] chennaipop map.add_layer(chennaipop) ###Output _____no_output_____ ###Markdown To get a sense of how much it rained and where, let's use rainfall data for December 2nd 2015, obtained from the Regional Meteorological Center in Chennai. The data is in the chennai-rainfall.csv file, which we load into a Pandas data frame and list its contents: ###Code import pandas as pd df = pd.read_csv('data/chennai-rainfall.csv') df.head() ###Output _____no_output_____ ###Markdown Tabular data is hard to visualize, so let's bring in a map from our GIS to visualize the data: ###Code map = gis.map("Tamil Nadu", zoomlevel=7) map ###Output _____no_output_____ ###Markdown We can import any pandas data frame, with an address field, as a layer in our GIS.
We then add this layer to our map to see the locations of the weather stations from which the rainfall data was collected: ###Code # Create an arcgis.features.FeatureCollection object by importing the pandas dataframe with an address field rainfall = gis.content.import_data(df, {"Address" : "LOCATION"}) # The FeatureCollection can be added to the map using add_layer() method, just like regular portal items map.add_layer(rainfall, { "renderer":"ClassedSizeRenderer", "field_name":"RAINFALL" }) ###Output _____no_output_____ ###Markdown Here we used the **smart mapping** capability of the GIS to automatically render the data with proportional symbols. To learn more about smart mapping, visit the sample titled 'Smart Mapping' under the section '05 Power Users & Developers'. Spatial AnalysisRainfall is a continuous phenomenon that affects the whole region, not just the locations of the weather stations. Based on the observed rainfall at the monitoring stations and their locations, we can interpolate and deduce the approximate rainfall across the whole region. We use the **Interpolate Points** tool from the GIS's spatial analysis service for this.The Interpolate Points tool uses empirical Bayesian kriging to perform the interpolation. ###Code from arcgis.features.analyze_patterns import interpolate_points interpolated_rf = interpolate_points(rainfall, field='RAINFALL') ###Output _____no_output_____ ###Markdown Let us create another map of Tamil Nadu state and render the output from the Interpolate Points tool: ###Code intmap = gis.map("Tamil Nadu", zoomlevel=7) intmap intmap.add_layer(interpolated_rf['result_layer']) ###Output _____no_output_____ ###Markdown We see that rainfall was most severe in and around Chennai as well as in some parts of central Tamil Nadu. What caused the flooding in Chennai? A wrong call that sank ChennaiMuch of the flooding and subsequent waterlogging was a consequence of the outflows from major reservoirs into swollen rivers and into the city following heavy rains. The release of waters from the Chembarambakkam reservoir in particular has received much attention. [Source: The Hindu, http://www.thehindu.com/news/cities/chennai/chennai-floods-a-wrong-call-that-sank-the-city/article7967371.ece] ###Code lakemap = gis.map("Chennai", zoomlevel=11) lakemap.height='450px' lakemap ###Output _____no_output_____ ###Markdown Let's have a look at the major lakes and water reservoirs that were filled to the brim in Chennai due to the rains. We plot the locations of some of the reservoirs that had a large outflow during the rains:To plot the locations, we use geocoding tools from the `tools` module. Your GIS can have more than one geocoding service; for simplicity, the sample below chooses the first available geocoder to perform an address search ###Code from arcgis.geocoding import geocode lakemap.draw(geocode("Chembarambakkam, Tamil Nadu")[0], {"title": "Chembarambakkam", "content": "Water reservoir"}) lakemap.draw(geocode("Puzhal Lake, Tamil Nadu")[0], {"title": "Puzhal", "content": "Water reservoir"}) lakemap.draw(geocode("Kannampettai, Tamil Nadu")[0], {"title": "Poondi Lake ", "content": "Water reservoir"}) ###Output _____no_output_____ ###Markdown To identify the flood-prone areas, let's trace the path that the water would take when released from the lakes.
To do this, we first bring in a layer of lakes in Chennai, and call the **`Trace Downstream`** analysis tool from the GIS: ###Code chennai_lakes = gis.content.search("Chennai Lakes", "feature collection", outside_org=True)[0] chennai_lakes ###Output _____no_output_____ ###Markdown The areas surrounding the trace paths are most prone to flooding and waterlogging. To identify the areas that were at risk, we buffer the traced flow paths by one mile in each direction and visualize it on the map. We see that large areas of the city of Chennai were susceptible to flooding and waterlogging. ###Code from arcgis.features.find_locations import trace_downstream from arcgis.features.use_proximity import create_buffers floodprone_buffer = create_buffers(trace_downstream(chennai_lakes), [ 1 ], units='Miles') lakemap.add_layer(floodprone_buffer) ###Output _____no_output_____ ###Markdown Nature's fury or human made disaster?"It is easy to attribute the devastation from unexpected flooding to the results of nature and climate change when in fact it is a result of poor planning and infrastructure. In Chennai, as in several cities across the country, we are experiencing the wanton destruction of our natural buffer zones—rivers, creeks, estuaries, marshlands, lakes—in the name of urban renewal and environmental conservation.The recent floods in Chennai are a fallout of real estate riding roughshod over the city’s waterbodies. Facilitated by an administration that tweaked and modified building rules and urban plans, the real estate boom has consumed the city’s lakes, ponds, tanks and large marshlands.The Ennore creek that used to be home to sprawling mangroves is fast disappearing with soil dredged from the sea being dumped there. The Kodungaiyur dump site in the Madhavaram–Manali wetlands is one of two municipal landfills that service the city. Velachery and Pallikaranai marshlands are a part of the Kovalam basin that was the southern-most of the four river basins for the city. Today, the slightest rains cause flooding and water stagnation in Velachery, home to the city’s largest mall, several other commercial and residential buildings, and also the site where low income communities were allocated land.The Pallikaranai marshlands, once a site for beautiful migratory birds, are now home to the second of the two landfills in the city where the garbage is rapidly leeching into the water and killing the delicate ecosystem."[Source: Chennai's Rain Check http://www.epw.in/commentary/chennais-rain-check.html]There are several marshlands and mangroves in the Chennai region that act as natural buffer zones to collect rain water. Let's see the human impact on Pallikaranai marshland over the last decade by comparing satellite images. ###Code ls_water = gis.content.search("Land Water Boundary (453) 1990-2010", max_items=1, outside_org = True)[0] ls_water ###Output _____no_output_____ ###Markdown Lets us see how the Pallikaranai marshland has changed over the past few decades, and how this has also contributed to the flooding. We create two maps and load the Land / Water Boundary layer to visualize this. This image layer is time enabled, and the map widget gives you the ability to navigate this dataset via time as well. 
###Code satmap1 = gis.map("Pallikaranai, Tamil Nadu, India", zoomlevel=13) satmap1.add_layer(ls_water) satmap1.set_time_extent('1/1/1989 UTC', '1/1/1990 UTC') satmap2 = gis.map("Pallikaranai, Tamil Nadu, India", zoomlevel=13) satmap2.add_layer(ls_water) satmap2.set_time_extent('1/1/2009 UTC', '1/1/2010 UTC') from ipywidgets import * satmap1.layout=Layout(flex='1 1', padding='10px') satmap2.layout=Layout(flex='1 1', padding='10px') box = HBox([satmap1, satmap2]) box ###Output _____no_output_____ ###Markdown The human impact on the marshland is all too apparent in the satellite images. The marshland has shrunk to less than a third of its size in just two decades."Not long ago, it was a 50-square-kilometre water sprawl in the southern suburbs of Chennai. Now, it is 4.3 square kilometres – less than a tenth of its original. The growing finger of a garbage dump sticks out like a cancerous tumour in the northern part of the marshland. Two major roads cut through the waterbody with few pitifully small culverts that are not up to the job of transferring the rain water flows from such a large catchment. The edges have been eaten into by institutes like the National Institute of Ocean Technology. Ironically, NIOT is an accredited consultant to prepare Environmental Impact Assessments on various subjects, including on the implications of constructing on waterbodies.Other portions of this wetland have been sacrificed to accommodate the IT corridor. But water offers no exemption to elite industry. Unmindful of the lofty intellectuals at work in the glass and steel buildings of the software parks, rainwater goes by habit to occupy its old haunts, bringing the back-office work of American banks to a grinding halt."[Source: http://scroll.in/article/769928/chennai-floods-are-not-a-natural-disaster-theyve-been-created-by-unrestrained-construction] Flood Relief CampsTo provide emergency assistance, the Tamil Nadu government has set up several flood relief camps in the flood-affected areas. They provide food, shelter and the basic necessities to thousands of people displaced by the floods. The locations of the flood relief camps were obtained from http://cleanchennai.com/floodrelief/2015/12/09/relief-centers-as-on-8-dec-2015/ and published to the GIS as a layer that is visualized below: ###Code relief_centers = gis.content.search("Chennai Relief Centers", item_type="Feature Collection", outside_org=True)[0] reliefmap = gis.map("Chennai", zoomlevel=10) reliefmap reliefmap.add_layer(chennaipop) reliefmap.add_layer(relief_centers) ###Output _____no_output_____ ###Markdown Let us read the relief center layer as a pandas dataframe to analyze the data further ###Code relief_data = relief_centers.layers[0].query().sdf relief_data.head() relief_data['No_of_persons'].sum() relief_data['No_of_persons'].describe() %matplotlib inline import matplotlib.pyplot as plt relief_data['No_of_persons'].hist() ###Output _____no_output_____ ###Markdown In our dataset, each row represents a relief camp location. To quickly get the dimensions (rows & columns) of our data frame, we use the `shape` property ###Code relief_data.shape ###Output _____no_output_____ ###Markdown As of 8th December, 2015, there were 31,478 people in the 136 relief camps. Let's aggregate them by the district the camp is located in. To accomplish this, we use the `aggregate_points` tool.
###Code chennai_pop_featurelayer = chennaipop.layers[0] res = arcgis.features.summarize_data.aggregate_points(relief_centers, chennai_pop_featurelayer, False, ["No_of_persons Sum"]) aggr_lyr = res['aggregated_layer'] reliefmap.add_layer(aggr_lyr, { "renderer": "ClassedSizeRenderer", "field_name":"SUM_No_of_persons"}) ###Output _____no_output_____ ###Markdown Let us represent the aggregate result as a table: ###Code df = aggr_lyr.query().sdf df2 = df[['NAME', 'SUM_No_of_persons']] df2.set_index('NAME', inplace=True) df2 df2.plot(kind='bar') ###Output _____no_output_____ ###Markdown Routing Emergency Supplies to Relief Camps A centralized location has been established at Nehru Stadium to organise the relief materials collected from various organizations and volunteers. From there, the relief material is distributed to the needy flood-affected people.The GIS provides routing tools that can help plan routes of the relief trucks from the center to relief camps: ###Code routemap = gis.map("Chennai", zoomlevel = 12) routemap nehru_stadium = geocode('Jawaharlal Nehru Stadium, Chennai')[0] routemap.draw(nehru_stadium, {"title": "Nehru Stadium", "content": "Chennai Flood Relief Center"}) import datetime start_time = datetime.datetime(2015, 12, 13, 9, 0) routes = arcgis.features.use_proximity.plan_routes(relief_centers, 15, 15, start_time, nehru_stadium, stop_service_time=30) routemap.add_layer(routes['routes_layer']) routemap.add_layer(routes['assigned_stops_layer']) ###Output _____no_output_____
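###Markdown The result of `plan_routes` can also be pulled down as a table for a quick look at the planned routes. The snippet below is a sketch that follows the same `query().sdf` pattern used for `aggr_lyr` above; the exact columns returned by the routing service are an assumption and may differ: ###Code
# Sketch: inspect the planned routes as a dataframe (same query().sdf pattern as aggr_lyr above).
# The column names in the returned layer depend on the routing service, so we only display the head here.
routes_df = routes['routes_layer'].query().sdf
routes_df.head()
###Output _____no_output_____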
temporal-difference/Temporal_Difference.ipynb
###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np def get_action(env, Q, s, epsilon=0.1): random_act = np.random.choice(range(env.action_space.n)) best_act = np.argmax(Q[s]) sel_act = np.random.choice([best_act, random_act], p=[1-epsilon, epsilon]) return(sel_act) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes epsilon = 0.1 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = epsilon/10 s = env.reset() done = False while not done: a = get_action(env, Q, s, epsilon=epsilon) s_new, r, done, _ = env.step(a) a_new = get_action(env, Q, s_new, epsilon=0.1) Q[s][a] = Q[s][a] + alpha*(r + Q[s_new][a_new] - Q[s][a]) s = s_new return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = 1 sum_r_episode = np.zeros(num_episodes) for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1/i_episode s = env.reset() done = False while not done: a = get_action(env, Q, s, epsilon=epsilon) s_new, r, done, _ = env.step(a) if r == -100: done = True a_new = get_action(env, Q, s_new, epsilon=0.0) Q[s][a] = Q[s][a] + alpha*(r + Q[s_new][a_new] - Q[s][a]) s = s_new sum_r_episode[i_episode-1] += r ## TODO: complete the function return {'Q':Q, 'sum_r':sum_r_episode} ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax_ = q_learning(env, 5000, .01) Q_sarsamax = Q_sarsamax_['Q'] # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Sum of rewards Over Episode ###Code sum_r_avg10 = np.zeros(500) plt.plot([np.sum((Q_sarsamax_['sum_r'] * 1/10)[i*10:(i+1)*10]) for i in range(500)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = 1 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1/i_episode s = env.reset() done = False while not done: a = get_action(env, Q, s, epsilon=epsilon) s_new, r, done, _ = env.step(a) a_best = get_action(env, Q, s_new, epsilon=0.0) probs = np.zeros(env.nA) probs[a_best] = 1-epsilon probs[list(set(range(env.nA)) - set([a_best]))] = epsilon/(env.nA-1) Q[s][a] = Q[s][a] + alpha*(r + np.dot(Q[s_new],probs) - Q[s][a]) s = s_new return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # This function is bad... I don't know why... # def generate_next_action_by_epsilon_greedy(Q, s, n_actions, epsilon=0.1): # # probs = np.zeros(n_actions) # # max_a_index = np.argmax(Q[s]) # for a_idx in range(Q[s].size): # if a_idx == max_a_index: # probs[a_idx] = 1 - epsilon + epsilon / n_actions # else: # probs[a_idx] = epsilon / n_actions # # choose_action = np.random.choice(np.arange(n_actions), p=probs) # return choose_action def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if np.random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return np.random.choice(np.arange(env.action_space.n)) # def generate_next_state(env, a): # next_state, reward, done, info = env.step(a) # return next_state, reward, done # def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): # """Returns updated Q-value for the most recent experience.""" # current = Q[state][action] # estimate in Q-table (for current state, action pair) # # get value of state, action pair at next time step # Qsa_next = Q[next_state][next_action] if next_state is not None else 0 # target = reward + (gamma * Qsa_next) # construct TD target # new_value = current + (alpha * (target - current)) # get updated value # return new_value def update_Q_sarsa(Q, s, a, r, alpha, gamma, next_s=None, next_a=None): Qsa_next = Q[next_s][next_a] if next_s is not None else 0 Qsa_new = Q[s][a] + alpha * (r + gamma * Qsa_next - Q[s][a]) return Qsa_new # TODO: why is my algorithm so slow? why is average award decreasing? # compare it with the solution def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection # action = generate_next_action_by_epsilon_greedy(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action # next_action = generate_next_action_by_epsilon_greedy(Q, state, nA, eps) # Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ # state, action, reward, next_state, next_action) # TODO: find the problem code! 
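                    # Sarsa update: Q(S,A) <- Q(S,A) + alpha * (R + gamma * Q(S',A') - Q(S,A)),
                    # bootstrapping from the action A' that was actually selected by the epsilon-greedy policy above.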
Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: # Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ # state, action, reward) Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q # # def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100, eps_start=1.0, eps_decay=.999, eps_min=0.05): # # initialize action-value function (empty dictionary of arrays) # # Q = defaultdict(lambda: np.zeros(env.nA)) # # n_actions = env.action_space.n # epsilon = eps_start # n_actions = env.action_space.n # number of actions # Q = defaultdict(lambda: np.zeros(n_actions)) # initialize empty dictionary of arrays # # # monitor performance # tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores # avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # # # loop over episodes # for i_episode in range(1, num_episodes+1): # # monitor progress # if i_episode % 100 == 0: # print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") # sys.stdout.flush() # # ## TODO: complete the function # # decaying epsilon # # epsilon = max(epsilon*eps_decay, eps_min) # epsilon = 1.0 / i_episode # # # Initialize score # score = 0 # # Observe state S_0 # state = env.reset() # # # # Choose action A_0 # # action = generate_next_action_by_epsilon_greedy(Q, state, n_actions, epsilon) # action = epsilon_greedy(Q, state, n_actions, epsilon) # # while True: # # Take action A_t and observe R_(t+1), S_(t+1) # # next_state, reward, done = generate_next_state(env, action) # next_state, reward, done, info = env.step(action) # # score += reward # # if not done: # # Choose action A_(t+1) using policy derived from Q # # next_action = generate_next_action_by_epsilon_greedy(Q, state, n_actions, epsilon) # next_action = epsilon_greedy(Q, state, n_actions, epsilon) # # # update Q table # # Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma, # # next_state, next_action) # Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) # state = next_state # s' --> s # action = next_action # a' --> a # # if done: # # Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma) # Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) # # tmp_scores.append(score) # append score # break # # if (i_episode % plot_every == 0): # avg_scores.append(np.mean(tmp_scores)) # # # # plot performance # plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) # plt.xlabel('Episode Number') # plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) # plt.show() # # print best 100-episode performance # print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) # # return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. 
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code np.random.seed(999) # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 2000000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 2000000/2000000Best Average Reward over 100 Episodes: -25.86 Estimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1): [[ 1 1 1 2 2 0 1 0 1 2 1 2] [ 3 2 0 1 0 1 0 3 0 3 0 1] [ 1 0 1 0 1 0 3 0 3 1 3 2] [ 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]] ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if np.random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return np.random.choice(np.arange(env.action_space.n)) # TODO def update_Q_sarsamax(Q, s, a, r, alpha, gamma, next_s=None): max_Q_next = max(Q[next_s]) if next_s is not None else 0 Qsa_new = Q[s][a] + alpha * (r + gamma * max_Q_next - Q[s][a]) return Qsa_new def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon # action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection done = False while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action next_state, reward, done, info = env.step(action) # take action A, observe R, S' Q[state][action] = update_Q_sarsamax(Q, state, action, reward, alpha, gamma, next_state) state = next_state # S <- S' score += reward # add reward to agent's score if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
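(For reference, the update applied by `update_Q_sarsamax` above is the standard Q-learning (Sarsamax) rule, $Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\big(R_{t+1} + \gamma \max_{a}Q(S_{t+1},a) - Q(S_t,A_t)\big)$: unlike Sarsa, the bootstrap term uses the greedy action in the next state rather than the action actually taken.)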
###Code np.random.seed(999) # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000Best Average Reward over 100 Episodes: -13.0 Estimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1): [[ 0 0 1 1 2 1 1 2 2 2 2 1] [ 1 1 3 1 3 1 1 3 1 2 1 2] [ 1 1 1 1 1 1 1 1 1 1 1 2] [ 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0]] ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expeceted_sarsa(Q, s, a, r, alpha, gamma, eps, nA, next_s=None): policy = np.zeros(nA); a_max = np.argmax(Q[next_s]) policy = np.repeat(eps/nA, nA) policy[a_max] += 1 - eps expected_Q_next = np.sum( Q[next_s] * policy) Qsa_new = Q[s][a] + alpha * (r + gamma * expected_Q_next - Q[s][a]) return Qsa_new def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # initialize score state = env.reset() # start episode eps = 0.1 # set value of epsilon # action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: # next_state, reward, done, info = env.step(action) # take action A, observe R, S' action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action next_state, reward, done, info = env.step(action) # take action A, observe R, S' Q[state][action] = update_Q_expeceted_sarsa(Q, state, action, reward, alpha, gamma, eps, nA, next_state) state = next_state # S <- S' score += reward # add reward to agent's score if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) 
plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code np.random.seed(999) # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 50000, .5) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code !pip install seaborn import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, eps): if random.random() > eps: return np.argmax(Q[state]) # greedy choice else: return random.choice(np.arange(env.action_space.n)) #random def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() # starting episode eps = 1.0 / i_episode action = epsilon_greedy(Q, state, eps) while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' else: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. 
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code import random # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every = 100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, eps) next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output 
_____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode #eps = 1.0 / i_episode # set value of epsilon eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, eps) next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): 
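            # every plot_every episodes, record the mean of the most recent scores for the performance plot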
avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import random import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # added by saeid def eps_greedy(eps,Q,state): # print (eps) # print (state) if random.uniform(0, 1) < eps: action = random.randint(0,3) else: action = np.argmax(Q[state]) # MY BIG Mistake was using max instead of argmax return action def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes # print ('Hi') for i_episode in range(1,num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function #eps = -0.9/(num_episodes-1)*(i_episode)+ 0.95 eps = 1.0 / (i_episode) # set value of epsilon state = env.reset() action = eps_greedy(eps,Q,state) while True: # print (action) next_state, reward, done, info = env.step(action) #print (next_state) #print (done) old_Q= Q[state][action] if not done: next_action = eps_greedy(eps,Q,next_state) #print (action) Q[state][action] = (1- alpha)*old_Q + alpha*(reward + gamma*Q[next_state][next_action]) else: Q[state][action] = (1- alpha)*old_Q + alpha*(reward) break state = next_state action = next_action return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = -0.9/num_episodes*(i_episode -1)+ 0.95 state = env.reset() while True: # print (action) action = eps_greedy(eps,Q,state) next_state, reward, done, info = env.step(action) #print (next_state) #print (d #print (action) Q[state][action] = (1- alpha)*Q[state][action] + alpha*(reward + gamma*np.max(Q[next_state])) if done: # print ('in done') break state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
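(The cell above implements the Q-learning, or sarsamax, update $$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\big(R_{t+1} + \gamma\,\max_a Q(S_{t+1},a) - Q(S_t,A_t)\big),$$ written here in the equivalent $(1-\alpha)$-weighted form; the target bootstraps from the greedy action value in the next state, not from the action the agent actually takes next.)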
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
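(As a reminder for the implementation: the Expected Sarsa target replaces the sampled next action value with its expectation under the current $\epsilon$-greedy policy, $$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\Big(R_{t+1} + \gamma \sum_a \pi(a\mid S_{t+1})\,Q(S_{t+1},a) - Q(S_t,A_t)\Big).$$)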
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa_q_update(env, Q, state, a_t, alpha, r_t_1, gamma, epsilon, s_t_1=None): # this could be np.dot, but considering how small the vectors are # it makes more sense to use sum and comprehension expected_q_value = sum(p * Q[s_t_1][a] for a, p in enumerate(get_probabilities(Q[s_t_1], epsilon))) \ if s_t_1 is not None else 0 return Q[state][a_t] + alpha * (r_t_1 + gamma * expected_q_value - Q[state][a_t]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0, epsilon=1.0, epsilon_decay=.01, epsilon_min=0.0001, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # the monitoring code is taken from the solution tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon_i = max(epsilon * np.exp(-i_episode * epsilon_decay), epsilon_min) state = env.reset() a_t = epsilon_greedy(env, state, Q, epsilon_i) score=0 while True: s_t_1, r_t_1, done, _ = env.step(a_t) score += r_t_1 if done: Q[state][a_t] = expected_sarsa_q_update(env, Q, state, a_t, alpha, r_t_1, gamma, epsilon) tmp_scores.append(score) break else: a_t_1 = epsilon_greedy(env, s_t_1, Q, epsilon_i) Q[state][a_t] = expected_sarsa_q_update(env, Q, state, a_t, alpha, r_t_1, gamma, epsilon, s_t_1) state = s_t_1 a_t = a_t_1 if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to 
visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function # in this case we set epsilon to a small value from the start # we observe convergence very quickly too Q_expsarsa = expected_sarsa(env, 1000, 1, epsilon=0.005) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 1000/1000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
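One building block the Expected Sarsa update needs is the vector of action probabilities under the current $\epsilon$-greedy policy. A minimal sketch of such a helper (illustrative only, not part of the starter code; the name `epsilon_greedy_probs` is made up here):
```
import numpy as np

def epsilon_greedy_probs(Q_s, eps, nA=4):
    """Action probabilities of an epsilon-greedy policy for a single state."""
    probs = np.ones(nA) * eps / nA      # every action receives eps / nA
    probs[np.argmax(Q_s)] += 1.0 - eps  # the greedy action receives the remaining mass
    return probs                        # entries are non-negative and sum to 1
```
The expected value in the update is then just the dot product of these probabilities with `Q[next_state]`.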
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_probs(Qs, nA, i_episode, eps=None): """obtains action probabilites corresponding to epsilon-greedy policy""" epsilon = 1.0 / i_episode if eps is not None: epsilon = eps policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Qs) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Q(Qsa, Qsa_next, reward, alpha, gamma): return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # observe S0 state = env.reset() # get epsilon-greedy action probabilities policy_s = get_probs(Q[state], env.nA, i_episode) # pick action A0 action = np.random.choice(np.arange(env.nA), p=policy_s) # limit the number of time steps per episode for t in range(300): # Take action At and observe Rt+1,St+1 next_state, reward, done, info = env.step(action) # add reward to the score score += reward if not done: # Choose action At+1 using policy derived from Q (e.g., epsilon-greedy) policy_s = get_probs(Q[next_state], env.nA, i_episode) next_action = np.random.choice(np.arange(env.nA), p=policy_s) #update TD estimate of Q Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) # update for next iteration state = next_state action = next_action else: # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
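Because the next cell reshapes the greedy policy to the $4\times 12$ grid, it can help to map a flat state index back to grid coordinates. Since states are numbered row-major, a small illustrative helper (not part of the notebook; `state_to_coords` is a made-up name) could look like this:
```
def state_to_coords(state, n_cols=12):
    """Convert a flat CliffWalking state index into (row, col) on the 4 x 12 grid."""
    return divmod(state, n_cols)

# e.g. state_to_coords(36) -> (3, 0), the start cell; state_to_coords(47) -> (3, 11), the goal
```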
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() while True: policy = get_probs(Q[state], env.nA, i_episode) action = np.random.choice(np.arange(env.nA), p=policy) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), reward, alpha, gamma) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
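To make the update above concrete with purely illustrative numbers: suppose $Q(S_t,A_t) = -10$, $R_{t+1} = -1$, $\max_a Q(S_{t+1},a) = -12$, $\gamma = 1$ and $\alpha = 0.01$. The TD target is $-1 + (-12) = -13$, the TD error is $-13 - (-10) = -3$, and the new estimate is $-10 + 0.01 \times (-3) = -10.03$.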
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() policy_s = get_probs(Q[state], env.nA, i_episode, 0.005) while True: action = np.random.choice(np.arange(env.nA), p=policy_s) next_state, reward, done, info = env.step(action) score += reward # get epsilon-greedy action probabilities (for S') policy_s = get_probs(Q[next_state], env.nA, i_episode, 0.005) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) state = next_state if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
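The `np.dot(Q[next_state], policy_s)` term above is simply the expected action value under the $\epsilon$-greedy policy. With made-up numbers for the four actions and the fixed $\epsilon = 0.005$ used above:
```
import numpy as np

Q_next = np.array([-2.0, -1.0, -3.0, -4.0])              # illustrative action values for S'
probs = np.array([0.00125, 0.99625, 0.00125, 0.00125])   # eps-greedy probabilities (greedy action is index 1)
print(np.dot(Q_next, probs))                             # -1.0075, dominated by the greedy action's value
```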
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Optional ExerciseAs an optional exercise to deepen your understanding, you are encouraged to reproduce Figure 6.4. ![](https://video.udacity-data.com/topher/2018/May/5ae93d8e_screen-shot-2017-12-17-at-12.49.34-pm/screen-shot-2017-12-17-at-12.49.34-pm.png)The figure shows the performance of Sarsa and Q-learning on the cliff walking environment for constant ϵ=0.1. As described in the textbook, in this case,- Q-learning achieves worse online performance (where the agent collects less reward on average in each episode), but learns -the optimal policy, and- Sarsa achieves better online performance, but learns a sub-optimal "safe" policy.You should be able to reproduce the figure by making only small modifications to your existing code. ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 10 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() while True: # get epsilon-greedy action probabilities policy_s = get_probs(Q[state], env.nA, i_episode, eps=0.1) # pick next action A action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # update Q Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) return Q, scores def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 10 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() policy_s = get_probs(Q[state], env.nA, i_episode, 0.1) while True: action = np.random.choice(np.arange(env.nA), p=policy_s) next_state, reward, done, info = env.step(action) score += reward # get epsilon-greedy action probabilities (for S') policy_s = get_probs(Q[next_state], env.nA, i_episode, 0.1) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) state = next_state if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 
0): scores.append(np.mean(tmp_scores)) return Q, scores num_episodes = 500 _, scores_q = q_learning(env, num_episodes, 0.1) _, scores_ex = expected_sarsa(env, num_episodes, 0.1) # plot performance plt.plot(scores_q) plt.plot(scores_ex) plt.show() ###Output Episode 500/500 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import random import pprint import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) INITIAL_STATE = 36 TERMINAL_STATE = 47 ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # TODO Delete! 
eps, cnt, total = 0.35, 0, 25000000 for i in range(total): r = random.random() # print(r) if r < eps: cnt += 1 print('\n', cnt/total) # TODO Delete! nA = 4 for i in range(10): print(random.randint(0, nA-1)) def monitor_progress(i_episode, num_episodes, no_notifications=10): if i_episode % (num_episodes/no_notifications) == 0: print("\rEpisode {}/{}.\n".format(i_episode, num_episodes), end="") sys.stdout.flush() def decayed_epsilon(i, num_episodes, start=1.0, end=0.01, fixed_after=0.9): epsilon_fixed_after_episodes = fixed_after*num_episodes if i < epsilon_fixed_after_episodes: epsilon_i = start - i*(start - end)/epsilon_fixed_after_episodes else: epsilon_i = end return epsilon_i def eps_greedy_action(Q, state, nA, eps): """ Return int depicting the action chosen according to epsilon-greedy policy, i.e. with probability eps a random action among ALL possible actions is chosen, and with probability 1-eps the greedy action is chosen. """ random_number = random.random() # between 0.0 and 1.0 if random_number <= eps: # select a random action w probability eps return random.randint(0, nA-1) else: # select greedy action otherwise return np.argmax(Q[state]) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """ Q(St, At) ← Q(St, At) + α(Rt+1 + γQ(St+1, At+1) − Q(St, At)) """ current_Q_estimate = Q[state][action] if next_state is None: Q_next = 0 else: Q_next = Q[next_state][next_action] alternative_Q_estimate = reward + (gamma*Q_next) updated_Q = current_Q_estimate + (alpha * (alternative_Q_estimate - current_Q_estimate)) return updated_Q def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.nA # number of actions Q = defaultdict(lambda: np.zeros(nA)) # print('nA =', nA) # loop over episodes for i_episode in range(1, num_episodes+1): monitor_progress(i_episode, num_episodes) state = env.reset() #print('state =', state) #env.render() # eps = decayed_epsilon(i_episode, num_episodes) eps = 1.0 / i_episode # Choose action A0 using policy derived from Q (e.g., ε-greedy) action = eps_greedy_action(Q, state, nA, eps) #print('action =', action) while True: # Take action A_t and observe R_t+1 , S_t+1 S_t1, R_t1, done, info = env.step(action) #env.render() #print('R_t1 =', R_t1) if done: # Q-Update #print('done') #print('R_t1 = {}, type = {}'.format(R_t1, type(R_t1))) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, R_t1) break else: next_action = eps_greedy_action(Q, S_t1, nA, eps) # Q-Update: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, R_t1, S_t1, next_action) state = S_t1 action = next_action #pprint.pprint(Q) return Q # Q_sarsa = sarsa(env, 1000, .01) ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
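Two exploration schedules appear in the cell above: the linear `decayed_epsilon` helper (commented out in the loop) and the $\epsilon_i = 1/i$ schedule that `sarsa` actually uses. A self-contained sketch of how differently they behave over the 50000 episodes used in the next cell (the constants reproduce the helper's defaults and are illustrative only):
```
num_episodes = 50000
cutoff = 0.9 * num_episodes                  # decayed_epsilon's fixed_after default
for i in (1, 10, 1000, 50000):
    linear = 1.0 - i * (1.0 - 0.01) / cutoff if i < cutoff else 0.01  # decayed_epsilon defaults
    inverse = 1.0 / i                                                 # schedule used inside sarsa()
    print(i, round(linear, 5), inverse)
```
With the inverse schedule, exploration dies off very quickly, which is why later episodes are almost entirely greedy.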
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 50000, .005) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/50000. Episode 10000/50000. Episode 15000/50000. Episode 20000/50000. Episode 25000/50000. Episode 30000/50000. Episode 35000/50000. Episode 40000/50000. Episode 45000/50000. Episode 50000/50000. ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """ Q(St, At) ← Q(St, At) + α(Rt+1 + γ maxa Q(St+1, a) − Q(St, At)) """ current_Q_estimate = Q[state][action] if next_state is None: Q_next = 0 else: Q_next = np.max(Q[next_state]) alternative_Q_estimate = reward + (gamma*Q_next) updated_Q = current_Q_estimate + (alpha * (alternative_Q_estimate - current_Q_estimate)) return updated_Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.nA # number of actions Q = defaultdict(lambda: np.zeros(nA)) # loop over episodes for i_episode in range(1, num_episodes+1): monitor_progress(i_episode, num_episodes) state = env.reset() eps = 1.0 / i_episode while True: # Choose action At using policy derived from Q (e.g., ε-greedy) # Question: Cheatsheet says e.g. eps-greedy?! I thought greedy here!? A_t = eps_greedy_action(Q, state, nA, eps) # Take action A_t and observe R_t+1 , S_t+1 S_t1, R_t1, done, _ = env.step(A_t) Q[state][A_t] = update_Q_sarsamax(alpha, gamma, Q, state, A_t, R_t1, S_t1) state = S_t1 if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
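Regarding the question raised in the comment inside `q_learning` above: the *behaviour* policy that picks $A_t$ is indeed typically $\epsilon$-greedy, so the agent keeps exploring, while the *target* of the update uses the greedy value $\max_a Q(S_{t+1}, a)$. It is precisely this mismatch between the behaviour policy and the greedy target policy that makes Q-learning an off-policy method.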
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 500/5000. Episode 1000/5000. Episode 1500/5000. Episode 2000/5000. Episode 2500/5000. Episode 3000/5000. Episode 3500/5000. Episode 4000/5000. Episode 4500/5000. Episode 5000/5000. ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
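(A practical note: the code in this notebook assumes the classic Gym API, where `env.reset()` returns just the initial state and `env.step(action)` returns `(next_state, reward, done, info)`. Depending on the installed `gym`/`gymnasium` release, `reset()` may instead return `(state, info)` and `step()` a five-tuple, in which case the episode loops in this notebook would need small adjustments.)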
###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output C:\Users\n0s011m\AppData\Local\Continuum\anaconda3\envs\tfdl\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 # TD Target = reward + TD error target = reward + (gamma * Qsa_next) # get updated value new_value = current + (alpha * (target - current)) return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0,plot_every=100): """ SARSA Algo """ # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: # take action A, observe R, S' next_state, reward, done, info = env.step(action) score += reward # add reward to agent's score if not done:# Non terminal next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action # Q value change after 1 step Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) # Updates state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q,state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next 
%d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score 
+= reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. 
Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(state, Q, epsilon, nA): a_max = np.argmax(Q[state]) probabilities = np.zeros(nA) probabilities[a_max] = 1 - epsilon probabilities = probabilities + epsilon / nA return probabilities def get_action(probabilities): action = np.random.choice(env.action_space.n, 1, p=probabilities)[0] return action def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes epsilon_init = 1.0 for i_episode in range(1, num_episodes + 1): # monitor progress epsilon = epsilon_init / i_episode if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() probabilities = epsilon_greedy(state, Q, epsilon, env.action_space.n) action = get_action(probabilities) while True: next_state, reward, done, info = env.step(action) probabilities = epsilon_greedy(next_state, Q, epsilon, env.action_space.n) next_action = get_action(probabilities) Q[state][action] = Q[state][action] + alpha * ( reward + gamma * Q[next_state][next_action] - Q[state][action]) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
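As an optional sanity check before the unit test, the next cell (an illustrative sketch; the toy Q-table entry is made up and is not part of the exercise) verifies the `epsilon_greedy` helper defined above: the returned probabilities should sum to 1, and the greedy action should receive probability $1 - \epsilon + \epsilon / nA$. ###Code
# Optional sanity check for the epsilon_greedy helper defined above.
# toy_Q is a made-up action-value row (state 0, four actions), used for illustration only.
toy_Q = {0: np.array([-1.0, -0.5, -2.0, -3.0])}
toy_eps, toy_nA = 0.1, 4
probs = epsilon_greedy(0, toy_Q, toy_eps, toy_nA)
print(probs)                                        # the greedy action (index 1) gets the extra mass
assert np.isclose(probs.sum(), 1.0)                 # valid probability distribution
assert np.isclose(probs[np.argmax(toy_Q[0])], 1 - toy_eps + toy_eps / toy_nA)
###Output _____no_output_____ ###Markdown Sampling with `get_action(probs)` then draws an action from exactly this distribution.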
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(state, Q, epsilon, nA): a_max = np.argmax(Q[state]) probabilities = np.zeros(nA) probabilities[a_max] = 1 - epsilon probabilities = probabilities + epsilon / nA return probabilities def get_action(probabilities): action = np.random.choice(env.action_space.n, 1, p=probabilities)[0] return action def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon_init = 1.0 for i_episode in range(1, num_episodes + 1): # monitor progress epsilon = epsilon_init = 1.0 / i_episode if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() while True: probabilities = epsilon_greedy(state, Q, epsilon, env.action_space.n) action = get_action(probabilities) next_state, reward, done, info = env.step(action) Qmax = np.max(Q[next_state]) Q[state][action] = Q[state][action] + alpha * ( reward + gamma * Qmax - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
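The evaluation cell below prints the greedy policy as integers. As an optional readability aid (not part of the original exercise), the small helper sketched next renders such a 4x12 integer grid with arrow symbols, using the action encoding UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3 and marking -1 entries with a dot. ###Code
# Optional, illustrative helper: render a 4x12 integer policy grid as arrows.
# Entries equal to -1 (states never visited / cliff states) are printed as dots.
def render_policy(policy_grid):
    symbols = {0: '^', 1: '>', 2: 'v', 3: '<', -1: '.'}
    for row in policy_grid:
        print(' '.join(symbols[int(a)] for a in row))

# example usage, once the next cell has defined policy_sarsamax:
# render_policy(policy_sarsamax)
###Output _____no_output_____ ###Markdown This is purely cosmetic and does not affect the unit test.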
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(state, Q, epsilon, nA): a_max = np.argmax(Q[state]) probabilities = np.zeros(nA) probabilities[a_max] = 1 - epsilon probabilities = probabilities + epsilon / nA return probabilities def get_action(probabilities): action = np.random.choice(env.action_space.n, 1, p=probabilities)[0] return action def get_expected_Q(state, Q, epsilon, nA): probabilities = epsilon_greedy(state, Q, epsilon, nA) expected_Q = np.sum(probabilities * Q[state]) return expected_Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon_init = 1.0 for i_episode in range(1, num_episodes +1): # monitor progress epsilon = epsilon_init / i_episode if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() probabilities = epsilon_greedy(state, Q, epsilon, env.action_space.n) action = get_action(probabilities) while True: next_state, reward, done, info = env.step(action) probabilities = epsilon_greedy(next_state, Q, epsilon, env.action_space.n) next_action = get_action(probabilities) Q_expected = get_expected_Q(next_state, Q, epsilon, env.action_space.n) Q[state][action] = Q[state][action] + alpha * ( reward + gamma * Q_expected - Q[state][action]) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
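As an optional check of `get_expected_Q` before running the unit test (the toy Q row below is invented for illustration), note the two limiting cases: with $\epsilon = 0$ the expectation collapses to $\max_a Q(s,a)$, i.e. the Q-learning target, and with $\epsilon = 1$ it is the plain average over the actions. ###Code
# Optional sanity check of get_expected_Q (toy values, illustration only).
toy_Q = {0: np.array([-2.0, -1.0, -4.0, -3.0])}
assert np.isclose(get_expected_Q(0, toy_Q, 0.0, 4), np.max(toy_Q[0]))    # eps=0 -> greedy / Q-learning target
assert np.isclose(get_expected_Q(0, toy_Q, 1.0, 4), np.mean(toy_Q[0]))   # eps=1 -> uniform average
print("expected value under eps=0.1:", get_expected_Q(0, toy_Q, 0.1, 4))
###Output _____no_output_____ ###Markdown For intermediate values of epsilon, the target lies between these two extremes.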
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import random import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps ): if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def update_QTable_using_SARSA(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): current_value = Q[state][action] if next_state is not None: next_state_action_reward = Q[next_state][next_action] else: next_state_action_reward = 0 target = reward + gamma * (next_state_action_reward) new_reward = current_value + alpha * (target - current_value) return new_reward def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1.0/ i_episode action = epsilon_greedy(Q, state, nA,eps ) while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward if not done: next_action = epsilon_greedy(Q, next_state, nA,eps ) Q[state][action] = update_QTable_using_SARSA(alpha, gamma, Q, state, action,\ reward, next_state, next_action) state = next_state action = next_action else: Q[state][action] = update_QTable_using_SARSA(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break # print("\n Dumping the tmp_scores . length is {0} and values are \n {1}".format(len(tmp_scores), tmp_scores)) if (i_episode % plot_every == 0): # print("Avg tmp_scores and append to avg_score") # print("Before: {0}".format(avg_scores)) avg_scores.append(np.mean(tmp_scores)) # print("After: {0}".format(avg_scores)) # print("\n Dumping the avg_scores . length is {0} and values are \n {1}".format(len(avg_scores), avg_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) print("\n The QTable is : \n") for key,value in Q.items(): print("Key :{0} and value: {1}".format(key,value)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
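Before running the unit test, the optional cell below checks `update_QTable_using_SARSA` against the one-step Sarsa rule $Q(S,A) \leftarrow Q(S,A) + \alpha\,[R + \gamma Q(S',A') - Q(S,A)]$ on made-up numbers (the toy table and values are not part of the exercise). ###Code
# Optional worked example of the one-step Sarsa update (made-up numbers, illustration only).
toy_Q = defaultdict(lambda: np.zeros(4))
toy_Q[5][2] = -4.0                 # Q(S=5,  A=2)
toy_Q[6][1] = -2.0                 # Q(S'=6, A'=1)
updated = update_QTable_using_SARSA(0.1, 1.0, toy_Q, 5, 2, -1.0, 6, 1)
# by hand: target = -1 + 1.0*(-2) = -3, TD error = -3 - (-4) = 1, so -4 + 0.1*1 = -3.9
print(updated)
assert np.isclose(updated, -3.9)
###Output _____no_output_____ ###Markdown The terminal update in the implementation above applies the same rule with the bootstrap term set to 0.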
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) policy_sarsa = ([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]) print("policy sarsa : \n {0}".format(policy_sarsa)) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) print("V sarsa : \n {0}".format(V_sarsa)) ###Output policy sarsa : [3, 1, 1, 1, 2, 3, 1, 2, 1, 2, 1, 2, 0, 1, 2, 1, 1, 1, 1, 1, 2, 0, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] V sarsa : [-11.123970245197876, -10.588526403726842, -9.913305125582733, -9.176824068084043, -8.40317694193676, -7.609686117055404, -6.805720590942865, -5.9945258354661695, -5.183447233645918, -4.379309228609241, -3.594824611777049, -2.867873769105195, -11.616196643635957, -10.835199466128572, -10.030777095566286, -9.189776328551623, -8.335022824710144, -7.467262755403748, -6.588412985745061, -5.697801347145464, -4.796472471769716, -3.8819006208432523, -2.9509509200599204, -1.9983734279726366, -12.043705405185234, -11.037302826673962, -10.018026526741833, -9.004664060250517, -8.008365726195965, -7.01139010415057, -6.010305733560852, -5.004645221781231, -4.0000080137190155, -3.0000028283417506, -2.0000005040552966, -0.9999999999999944, -13.040079533007008, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_q_learning(Q, state, nA, eps): if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def update_Q_QLearning(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): current_state = Q[state][action] if next_state is not None: next_reward = np.max(Q[next_state]) else: next_reward = 0 target = reward + gamma * next_reward new_reward_curr_state = current_state + alpha * (target - current_state) return new_reward_curr_state def q_learning(env, num_episodes, alpha, gamma=1.0,plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) #quques to track tmp and avg scores tmp_scores = deque(maxlen = plot_every) avg_scores = deque(maxlen = num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1.0 / i_episode while True: action = epsilon_greedy_q_learning(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_QLearning(alpha, gamma, Q, state, \ action, reward,next_state) state = next_state if done: tmp_scores.append(score) break if i_episode % plot_every == 0: avg_scores.append(np.mean(tmp_scores)) #Plot Performance plt.plot(np.linspace(0,num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel("Number of Episodes") plt.ylabel("Avg Reward every {0} episode".format(plot_every)) plt.show() return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code
def update_Q_expected_sarsa(alpha, gamma, Q, nA, eps, state, action, reward, next_state=None):
    # print("The state is : {0}".format(state))    # per-step debug prints disabled
    # print("The action is : {0}".format(action))
    current = Q[state][action]                      # estimate in Q-table (for current state, action pair)
    policy_s = get_probs(Q[next_state], eps, nA)    # epsilon-greedy policy over the next state
    Qsa_next = np.dot(Q[next_state], policy_s)      # expected value of the next state
    target = reward + gamma * Qsa_next
    new_value = current + (alpha * (target - current))
    return new_value

def get_probs(Q_s, epsilon, nA):
    """ obtains the action probabilities corresponding to epsilon-greedy policy """
    policy_s = np.ones(nA) * epsilon / nA
    greedy_action = np.argmax(Q_s)
    policy_s[greedy_action] = 1 - epsilon + (epsilon / nA)
    return policy_s

def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
    # initialize empty dictionary of arrays
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    tmp_scores = deque(maxlen=plot_every)
    avg_scores = deque(maxlen=num_episodes)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        eps = 0.05
        state = env.reset()
        score = 0
        while True:
            action = epsilon_greedy(Q, state, nA, eps)
            next_state, reward, done, info = env.step(action)
            score += reward
            Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, nA, eps,
                                                       state, action, reward, next_state)
            state = next_state
            if done:
                tmp_scores.append(score)    # append score
                break
        if (i_episode % plot_every == 0):
            avg_scores.append(np.mean(tmp_scores))
    # plot performance
    plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
    return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
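As an optional illustration before the unit test (the Q row below is made up), the next cell shows why Expected Sarsa with a small epsilon behaves much like Q-learning: the expected backup `np.dot(policy_s, Q_row)` approaches `max(Q_row)` as epsilon shrinks to 0. ###Code
# Optional illustration with a made-up Q row: the expected-Sarsa backup
# approaches the Q-learning (Sarsamax) backup as epsilon -> 0.
Q_row = np.array([-3.0, -1.0, -2.0, -4.0])
for eps in [0.5, 0.1, 0.01, 0.0]:
    policy_s = get_probs(Q_row, eps, 4)
    print(eps, np.dot(policy_s, Q_row))
assert np.isclose(np.dot(get_probs(Q_row, 0.0, 4), Q_row), np.max(Q_row))
###Output _____no_output_____ ###Markdown With the small fixed epsilon used in the implementation above (0.05), the learned policy should therefore look very similar to the Q-learning one.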
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa_2(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code %cd deep-reinforcement-learning/temporal-difference import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output /content/deep-reinforcement-learning/temporal-difference ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_policy(env, Q, s, epsilon): if np.random.uniform() < epsilon: return env.action_space.sample() else: return np.argmax(Q[s]) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes epsilon = 0.5 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") if epsilon > 0.1: epsilon -= 0.1 sys.stdout.flush() ## Done: complete the function # epsilon policy s = env.reset() a = epsilon_policy(env, Q, s, epsilon) done = False while done == False: s_next, r, done, info = env.step(a) a_next = epsilon_policy(env, Q, s_next, epsilon) # SARSA Q[s][a] = Q[s][a] + alpha * (r + gamma * Q[s_next][a_next] - Q[s][a]) s, a = s_next, a_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = 0.5 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") if epsilon > 0.1: epsilon -= 0.1 sys.stdout.flush() ## Done: complete the function s = env.reset() done = False while done == False: a = epsilon_policy(env, Q, s, epsilon) s_next, r, done, info = env.step(a) a_next_best = np.argmax(Q[s_next]) Q[s][a] = Q[s][a] + alpha * (r + gamma * Q[s_next][a_next_best] - Q[s][a]) s = s_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_Q(Q, s, epsilon): return np.sum(Q[s]) * epsilon / len(Q[s]) + np.max(Q[s]) * (1 - epsilon) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = 0.5 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") if epsilon > 0.1: epsilon -= 0.1 sys.stdout.flush() ## Done: complete the function s = env.reset() done = False while done == False: a = epsilon_policy(env, Q, s, epsilon) s_next, r, done, info = env.step(a) Q[s][a] = Q[s][a] + alpha * (r + gamma * expected_Q(Q, s_next, epsilon) - Q[s][a]) s = s_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Users/ashdasstooie/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Users/ashdasstooie/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Users/ashdasstooie/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Users/ashdasstooie/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## AshD - complete the function score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
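One detail worth checking before the unit test is the terminal update: when `next_state` is omitted, `update_Q_sarsa` bootstraps from 0, so the target is just the reward. The optional cell below verifies both cases by hand on made-up numbers (the toy table is not part of the exercise). ###Code
# Optional check of update_Q_sarsa defined above (made-up numbers, illustration only).
toy_Q = defaultdict(lambda: np.zeros(4))
toy_Q[10][3] = -5.0                  # Q(S, A)
toy_Q[11][0] = -2.0                  # Q(S', A')
# non-terminal: target = R + gamma*Q(S',A') = -1 + (-2) = -3, so -5 + 0.1*(-3 + 5) = -4.8
print(update_Q_sarsa(0.1, 1.0, toy_Q, 10, 3, -1.0, 11, 0))
# terminal: target = R = -1 (Qsa_next defaults to 0), so -5 + 0.1*(-1 + 5) = -4.6
print(update_Q_sarsa(0.1, 1.0, toy_Q, 10, 3, -1.0))
###Output _____no_output_____ ###Markdown The Sarsa loop above applies exactly this terminal form of the update once `done` is reached.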
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## AshD - complete the function score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output 
_____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, Q, state, action, reward, policy_s, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = np.dot(Q[next_state], policy_s,) target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + epsilon / nA return policy_s def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## AshD - complete the function score = 0 # initialize score state = env.reset() # start episode eps = 0.005 #1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to 
agent's score # design policy rule policy_s = get_probs(Q[next_state], eps, nA) # update Q value Q[state][action] = update_Q_expsarsa(alpha, gamma, Q, \ state, action, reward, policy_s, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. 
Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_EpslionGreedy_probs(Q_s,i_episode, nA, epsilon=None): """ obtains the action probabilities corresponding to epsilon-greedy policy. picking a strategy: exploration or exploitation """ if epsilon is None: epsilon=1.0/i_episode policy_s = np.ones(nA) * epsilon / nA #All actions are equally probable, size of action space best_a = np.argmax(Q_s) #best action with best estimate policy_s[best_a] = 1 - epsilon + (epsilon / nA) #assigning the new prob of the best return policy_s #array of size of action space def sarsa(env, num_episodes, alpha,gamma=1.0, epsilon=1.0 ,min_epsilon=0.01, epsilon_decay=0.8): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) #extra varabiles for performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #performace vars score = 0 # initalize with the begin state state = env.reset() #s0 #exploit more,explore less (ie epislon decay) epsilon = max(epsilon_decay* epsilon,min_epsilon) #get prob of each action with the epsilon greedy approach prob=get_EpslionGreedy_probs(Q[state],i_episode, env.nA, epsilon) #choose action based on the epislon prob. 
action = np.random.choice(np.arange(env.nA), p=prob) #a0 #you already have s0, a0 #use epsilon greedy approach to get next state and action for t_step in range(0,300): # Interacte with env to get r1,s1 next_state, reward, done, info = env.step(action) #s1,r1 # Now you have s0,a0,s1,r1, missing is a1 #performance score+=reward if not done: #if more steps to do , get a1 #get a1 for the new state (s1) using epsilon prob approach prob=get_EpslionGreedy_probs(Q[next_state],i_episode, env.nA, epsilon) next_action = np.random.choice(np.arange(env.nA), p=prob ) #a1 # Now you have s0,a0,r1,s1,a1 #update Q[s0][a0] Q[state][action] = Q[state][action] + alpha*( reward + gamma*(Q[next_state][next_action]) - Q[state][action]) state = next_state action = next_action else : #finished, no more actions to make #Q[state][action] = Q[state][action] + alpha*( reward + gamma*(0) - Q[state][action]) # simplified Q[state][action] = Q[state][action] + alpha*( reward - Q[state][action]) # performance : append s tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01, epsilon=0.1) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, epsilon=1.0 ,min_epsilon=0.01, epsilon_decay=0.85): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) #extra varabiles for performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #performace vars score = 0 # initalize with the begin state state = env.reset() #s0 #exploit more,explore less (ie epislon decay) epsilon = max(epsilon_decay* epsilon,min_epsilon) #use epsilon greedy approach to get next state and action #for t_step in range(0,300): #removed, why ? while True: #you already have s0 #for a0, get prob of each poss. action with the epsilon greedy approach prob=get_EpslionGreedy_probs(Q[state],i_episode, env.nA, epsilon) #choose action based on the epislon prob. action = np.random.choice(np.arange(env.nA), p=prob) #a0 # Interacte with env to get r1,s1 next_state, reward, done, info = env.step(action) #s1,r1 next_action = np.argmax(Q[next_state]) #a1 QL learning approach # Now you have s0,a0,s1,r1,a1 #performance score+=reward #update Q[s0][a0] Q[state][action] = Q[state][action] + alpha*( reward + gamma*(Q[next_state][next_action]) - Q[state][action]) state = next_state #action = next_action #removed, as we select a0 in epsilon and a1 in argmax if done : #finished # performance : append s tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
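For reference, the `q_learning` implementation above uses the Sarsamax target, in which the value of the next state is taken from the greedy action rather than the action actually executed — a sketch of the update in equation form:
$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_{a \in \mathcal{A}} Q(S_{t+1}, a) - Q(S_t, A_t) \big)$$
Because the behaviour policy (the $\epsilon$-greedy action selection) can differ from the greedy action used inside the target, Q-learning is an off-policy method.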
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, epsilon=0.005 ,min_epsilon=0.001, epsilon_decay=0.85): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) #extra varabiles for performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #performace vars score = 0 # initalize with the begin state state = env.reset() #s0 #exploit more,explore less (ie epislon decay) epsilon = max(epsilon_decay* epsilon,min_epsilon) #for a0, get prob of each poss. action with the epsilon greedy approach prob=get_EpslionGreedy_probs(Q[state],i_episode, env.nA, epsilon) #use epsilon greedy approach to get next state and action while True: #you already have s0 #choose action based on the epislon prob. 
action = np.random.choice(np.arange(env.nA), p=prob) #a0 # Interacte with env to get r1,s1 next_state, reward, done, info = env.step(action) #s1,r1 #estimate alterntive in Expected SARSA prob=get_EpslionGreedy_probs(Q[next_state],i_episode, env.nA, epsilon) weighted_estimate = np.dot(Q[next_state],prob) # Now you have s0,a0,s1,r1,a1 #performance score+=reward #update Q[s0][a0] Q[state][action] = Q[state][action] + alpha*( reward + gamma*(weighted_estimate) - Q[state][action]) state = next_state #action = next_action #removed, as we select a0 in epsilon and a1 in argmax if done : #finished # performance : append s tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. 
Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /media/mayur/4849a1fc-787d-4ad7-83d7-cf186eda025b/mayur/anaconda3/envs/drlnd/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q,eps,state,nA): """Select an action epsilon greedily""" if (np.random.random() > eps): return np.argmax(Q[state]) else: return np.random.choice(range(nA)) def update_Q_sarsa(Q,eps,state,gamma,alpha,action,reward,next_state=None,next_action=None): """ Update the Q function using SARSA (TD(0)) """ Q_current = Q[state][action] Q_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + gamma*Q_next # construct TD target Q_update = Q_current + alpha*(target - Q_current) # Get updated value for Q[state][action] return Q_update def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) eps = eps_start nA = env.action_space.n # initialize performance monitor tmp_scores = deque(maxlen = plot_every) avg_scores = deque(maxlen = num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = eps/i_episode score = 0 # initialize score state = env.reset() action = eps_greedy(Q,eps,state,nA) while True: next_state, reward, done, info = env.step(action) score += reward # add reward to agent's score if not done: next_action = eps_greedy(Q,eps,next_state,nA) Q[state][action]=update_Q_sarsa(Q,eps,state,gamma,alpha,action,reward,next_state,next_action) 
#print(Q[state][action],Q[next_state][next_action]) state = next_state action = next_action else: Q[state][action]=update_Q_sarsa(Q,eps,state,gamma,alpha,action,reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q, eps, state,nA): """Select an action greedily""" if np.random.random()>eps: return np.argmax(Q[state]) else: return np.random.choice(range(nA)) def update_Q(state,action,reward,next_state,eps,alpha, gamma,Q): Q_current = Q[state][action] Q_next = np.max(Q[next_state]) update_value = alpha*(reward + gamma*Q_next - Q_current) Q_update = Q_current + update_value return Q_update def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) eps = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = max(eps/i_episode,0.05) state = env.reset() while True: action = eps_greedy(Q,eps,state,env.nA) next_state, reward, done, info = env.step(action) Q[state][action] = update_Q(state,action,reward,next_state,eps,alpha, gamma,Q) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q,state,eps,nA): if np.random.random()>eps: return np.argmax(Q[state]) else: return np.random.choice(range(nA)) def Q_avg(eps,Q,nA,next_state): policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Q_next = np.dot(Q[next_state], policy_s) # get value of state at next time step return Q_next def update_Q(state,action,next_state,reward,eps,alpha,gamma,Q,nA): Q_current = Q[state][action] Q_next = Q_avg(eps,Q,nA,next_state) target = reward + gamma*Q_next # construct TD target Q_update = Q_current + alpha*(target-Q_current) return Q_update def expected_sarsa(env, num_episodes, alpha, gamma=1.0,eps_start=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) eps = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps=0.005 state = env.reset() while True: action = eps_greedy(Q,state,eps,env.nA) next_state, reward, done, info = env.step(action) Q[state][action]=update_Q(state,action,next_state,reward,eps,alpha,gamma,Q,env.nA) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import random import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output /home/pemfir/anaconda3/envs/deep_rl/lib/python3.7/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. result = entry_point.load(False) ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. Notice that `next_state, reward, done, info = env.step(action)`: `done` remains `False` until the agent reaches state 47, at which point the episode ends. Unless you step onto the cliff, a move that would push the agent off the grid (e.g., moving left or up from state 0) has no effect, and the agent stays where it is. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate.
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code action_map = {0 : 'UP' , 1 : 'RIGHT' , 2 : "DOWN", 3 : "LEFT"} state = env.reset() print("initial state", state) while True: action = random.randrange(4) next_state, reward, done, info = env.step(action) print("action", action_map[action]) print(next_state, reward, done, info) if done: break def generate_action(state, Q, epsilon, number_of_action): if np.random.random() <= epsilon: action = random.randrange(number_of_action) else: action = np.argmax(Q[state]) return action def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99, eps_min=0.05, plot_every=100): # initialize action-value function (empty dictionary of arrays) number_of_action = env.nA epsilon = eps_start Q = defaultdict(lambda: np.zeros(number_of_action)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress state = env.reset() action = generate_action(state, Q, epsilon, number_of_action) score = 0 while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = generate_action(next_state, Q, epsilon, number_of_action) # keep track of rewards in the episode # epsilon = max(epsilon*eps_decay, eps_min) epsilon = 1.0 / i_episode Qsa_next = Q[next_state][next_action] Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) state = next_state action = next_action if done: # this means you reached the end state. The last action you did, took you to the end state 47 # state = state before the end state # next_state = 47 # action that got you to 47 # There is no next state and Q function of the reward state is same as the reward you recieve in that state Qsa_next = 0 Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) # adding the total accumulated reward as 1 element of a list to the tmp_scores tmp_scores.append(score) break if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
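As a side note, the `generate_action` helper above explores uniformly at random with probability $\epsilon$ and acts greedily otherwise, so the induced $\epsilon$-greedy policy assigns (approximately) the following probabilities over the $|\mathcal{A}| = 4$ actions:
$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \epsilon / |\mathcal{A}| & \text{if } a = \arg\max_{a'} Q(s, a') \\ \epsilon / |\mathcal{A}| & \text{otherwise} \end{cases}$$
The schedule $\epsilon = 1 / i$ (with $i$ the episode index) used in the loop drives $\epsilon$ toward zero, so the policy becomes greedy in the limit while still exploring in early episodes.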
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code Q_sarsamax[1] np.argmax(Q_sarsamax[1]) def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99, eps_min=0.05, plot_every=100): # initialize action-value function (empty dictionary of arrays) number_of_action = env.nA epsilon = eps_start Q = defaultdict(lambda: np.zeros(number_of_action)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress state = env.reset() action = generate_action(state, Q, epsilon, number_of_action) score = 0 while True: next_state, reward, done, info = env.step(action) score += reward if not done: # notice there is no next_action. actions in this method are all based on epsilon greedy epsilon = 1.0 / i_episode Qsa_next = np.max(Q[next_state]) Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) # action for next round. action = generate_action(next_state, Q, epsilon, number_of_action) state = next_state # action = next_action if done: # this means you reached the end state. 
The last action you did, took you to the end state 47 # state = state before the end state # next_state = 47 # action that got you to 47 # There is no next state and Q function of the reward state is same as the reward you recieve in that state Qsa_next = 0 Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) # adding the total accumulated reward as 1 element of a list to the tmp_scores tmp_scores.append(score) break if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def compute_action_probabilities(q_values, epsilon, number_of_action): probabilities = np.ones(number_of_action)*epsilon/number_of_action probabilities[np.argmax(q_values)] += 1 - epsilon return probabilities np.dot([.1, .2, .3, .4], compute_action_probabilities([.1, .2, .3, .4], 0.04, 4)) def compute_action_probabilities(q_values, epsilon, number_of_action): probabilities = np.ones(number_of_action)*epsilon/number_of_action probabilities[np.argmax(q_values)] += 1 - epsilon return probabilities def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=.003, eps_decay=.99, eps_min=0.05, plot_every=100): # initialize action-value function (empty dictionary of arrays) number_of_action = env.nA epsilon = eps_start Q = defaultdict(lambda: np.zeros(number_of_action)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress state = env.reset() action = generate_action(state, Q, epsilon, number_of_action) score = 0 while True: next_state, reward, done, info = env.step(action) score += reward if not done: # notice there is no next_action; actions in this method are all based on epsilon greedy # epsilon = 1.0 / i_episode action_probabilities = compute_action_probabilities(Q[next_state], epsilon, number_of_action) Qsa_next = np.sum(np.dot(action_probabilities, Q[next_state])) Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) # action for next round. action = generate_action(next_state, Q, epsilon, number_of_action) state = next_state # action = next_action if done: # this means you reached the end state. The last action you took brought you to the end state 47 # state = state before the end state # next_state = 47 # action that got you to 47 # There is no next state, and the Q value of the terminal transition is the same as the reward you receive on it Qsa_next = 0 Q[state][action] += alpha*(reward + gamma*Qsa_next - Q[state][action]) # adding the total accumulated reward as 1 element of a list to the tmp_scores tmp_scores.append(score) break if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function.
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) print(env.nA) np.zeros(env.nA) ###Output Discrete(4) Discrete(48) 4 ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() #reset the env and output the starting position action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) if done: #Q[state][action] = updateQscore(Q[state][action], 0, reward, alpha, gamma) break else: next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if next_state in Q else env.action_space.sample() # update the action-value function estimate from the next move Q[state][action] = updateQscore(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) # t <- t+1 state = next_state action = next_action return Q def updateQscore(Qsa, Qsa_next, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa)) def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = eps_start # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = max(epsilon*eps_decay, eps_min) epsilon = 1.0 / i_episode # generate an episode by following epsilon-greedy policy Q = update_Q(env, Q, epsilon, env.nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
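Looking ahead to Part 3 of this notebook, Expected Sarsa replaces the sampled next-action value in the TD target with its expectation under the current $\epsilon$-greedy policy — a sketch of that target:
$$R_{t+1} + \gamma \sum_{a \in \mathcal{A}} \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a)$$
which is what the `np.dot(policy, Qs_next)` term inside the Expected Sarsa version of `updateQscore` computes.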
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() #reset the env and output the starting position action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) if done: #Q[state][action] = updateQscore(Q[state][action], 0, reward, alpha, gamma) break else: next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if next_state in Q else env.action_space.sample() # update the action-value function estimate from the next move Q[state][action] = updateQscore(Q[state][action], Q[next_state], reward, alpha, gamma) # t <- t+1 state = next_state action = next_action return Q def updateQscore(Qsa, Qs_next, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * np.max(Qs_next)) - Qsa)) def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = max(epsilon*eps_decay, eps_min) epsilon = 1.0 / i_episode # generate an episode by following epsilon-greedy policy Q = update_Q(env, Q, epsilon, env.nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() #reset the env and output the starting position policy = get_probs(Q[state], epsilon, nA) action = np.random.choice(np.arange(nA), p=policy) if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) if done: #Q[state][action] = updateQscore(Q[state][action], 0, 0, reward, alpha, gamma) break else: policy = get_probs(Q[next_state], epsilon, nA) next_action = np.random.choice(np.arange(nA), p=policy) if next_state in Q else env.action_space.sample() # update the action-value function estimate from the next move Q[state][action] = updateQscore(Q[state][action], Q[next_state], policy, reward, alpha, gamma) # t <- t+1 state = next_state action = next_action return Q def updateQscore(Qsa, Qs_next, policy, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * np.dot(policy, Qs_next)) - Qsa)) def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = max(epsilon*eps_decay, eps_min) #epsilon = 1.0 / i_episode epsilon = 0.005 # generate an episode by following epsilon-greedy policy Q = update_Q(env, Q, epsilon, 
env.nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) print(env.action_space.n) ###Output Discrete(4) Discrete(48) 4 ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def generate_cliff_action_eps_greedy(Q, state, env, eps): if np.random.random() > eps: return generate_cliff_action_greedy(Q, state) else: return generate_cliff_action_random(env) def generate_cliff_action_random(env): probs = 1./env.action_space.n * np.ones(env.action_space.n) action = np.random.choice(np.arange(4), p=probs) return action def generate_cliff_action_greedy(Q, state): return np.argmax(Q[state]) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) env.nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() St = env.reset() while True: eps = 1/i_episode At = generate_cliff_action_eps_greedy(Q, St, env, eps) St1, Rt1, done, info = env.step(At) if done: Q[St][At] += alpha*(Rt1-Q[St][At]) break; At1 = generate_cliff_action_eps_greedy(Q, St1, env, eps) Q[St][At] += alpha*(Rt1+gamma*Q[St1][At1]-Q[St][At]) At = At1 St = St1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays env.nA = env.action_space.n print(env.nA) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() St = env.reset() eps = 1./i_episode while True: At = generate_cliff_action_eps_greedy(Q, St, env, eps) St1, Rt1, done, info = env.step(At) if done: Q[St][At] += alpha*(Rt1-Q[St][At]) break; Qs_max = np.max(Q[St1]) Q[St][At] += alpha*(Rt1+gamma*Qs_max-Q[St][At]) St = St1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output 4 Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_mean(Qs, nA, eps): prob = np.ones(nA) * eps/nA prob[np.argmax(Qs)] += (1-eps) #print(prob, Qs) return np.dot(prob,Qs) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() St = env.reset() eps = 0.005 while True: At = generate_cliff_action_eps_greedy(Q, St, env, eps) St1, Rt1, done, info = env.step(At) if done: Q[St][At] += alpha*(Rt1-Q[St][At]) break; #print('Q', Q) Qs_mean = get_mean(Q[St1],nA, eps) #print(Qs_mean) Q[St][At] += alpha*(Rt1+gamma*Qs_mean-Q[St][At]) St = St1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random as rn from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def greedy_eps_action(Q, state, nA, eps): if rn.random()> eps: return np.argmax(Q[state]) else: return rn.choice(np.arange(nA)) def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1, eps_decay = .95, eps_min = 1e-2): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=num_episodes) # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = max(eps_min,eps*eps_decay) state = env.reset() score = 0 action = greedy_eps_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = greedy_eps_action(Q, next_state, nA, eps) this_V = Q[state][action] next_V = Q[next_state][next_action] Q[state][action] = this_V + alpha*(reward + gamma*next_V - this_V) state = next_state action = next_action if done: Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action]) tmp_scores.append(score) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function.
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start = 1, eps_decay = .95, eps_min = 1e-2): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) eps = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = max(eps_min,eps*eps_decay) state = env.reset() score = 0 action = greedy_eps_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = greedy_eps_action(Q, next_state, nA, eps) this_V = Q[state][action] next_V = Q[next_state][next_action] Q[state][action] = this_V + alpha*(reward + gamma*max(Q[next_state]) - this_V) state = next_state action = next_action if done: Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action]) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
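The only difference from Sarsa is the bootstrap term: Q-learning (sarsamax) uses the maximum action value in the next state instead of the value of the action that was actually selected. The minimal comparison below uses made-up numbers that do not come from the notebook.

```python
import numpy as np

# Hypothetical action values for the next state (illustration only)
Q_next = np.array([-12.0, -11.0, -13.0, -14.0])
next_action = 2           # the action the epsilon-greedy behavior policy happened to pick
reward, gamma = -1.0, 1.0

sarsa_target = reward + gamma * Q_next[next_action]   # -14.0: bootstraps from the sampled action
sarsamax_target = reward + gamma * np.max(Q_next)     # -12.0: bootstraps from the greedy action
print(sarsa_target, sarsamax_target)
```

Because the max ignores exploratory actions, Q-learning evaluates the greedy policy even while behaving epsilon-greedily, which is why it tends to settle on the risky path directly along the cliff edge.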
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1, eps_decay = .9, eps_min = 1e-2): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) eps = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = .001 state = env.reset() score = 0 action = greedy_eps_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = greedy_eps_action(Q, next_state, nA, eps) this_V = Q[state][action] prob_s = np.ones(nA)*eps/nA prob_s[np.argmax(Q[next_state])] = 1 - eps + eps/nA Q[state][action] = this_V + alpha*(reward + gamma*np.dot(Q[next_state], prob_s) - this_V) state = next_state action = next_action if done: Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action]) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
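Expected Sarsa replaces the sampled next action value with its expectation under the epsilon-greedy policy, which is exactly the dot product of `prob_s` and `Q[next_state]` computed inside the loop above. Here is a standalone illustration with invented numbers (not taken from the notebook):

```python
import numpy as np

Q_next = np.array([-12.0, -11.0, -13.0, -14.0])   # hypothetical action values for the next state
eps, nA = 0.005, 4
reward, gamma = -1.0, 1.0

# epsilon-greedy probabilities over the next state's actions
probs = np.ones(nA) * eps / nA
probs[np.argmax(Q_next)] = 1 - eps + eps / nA

expected_target = reward + gamma * np.dot(probs, Q_next)
print(expected_target)   # about -12.0, since nearly all probability sits on the greedy action
```

Averaging over all actions removes the sampling noise in Sarsa's single next action, which helps explain why the much larger step size (`alpha = 1`) used in the test cell below still works.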
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import random import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Users/postbg/.virtualenvs/drlnd/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def choose_epsilon_greedy_action(action_values, epsilon, nA): if np.random.random() > epsilon: return np.argmax(action_values) return np.random.randint(0, nA) def update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state=None, next_action=None): Qsa_next = Q[next_state][next_action] if next_state is not None else 0. return Q[state][action] + alpha * (reward + gamma * Qsa_next - Q[state][action]) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 epsilon = 1. / i_episode state = env.reset() action = choose_epsilon_greedy_action(Q[state], epsilon, nA) while True: next_state, reward, done, info = env.step(action) score += reward # for monitoring score if done: Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma) tmp_scores.append(score) # for monitoring score break next_action = choose_epsilon_greedy_action(Q[next_state], epsilon, nA) Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state, next_action) state = next_state action = next_action if (i_episode % plot_every == 0): # for monitoring score avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa_max(Q, state, action, reward, alpha, gamma, next_state=None): Qsa_next = np.max(Q[next_state]) if next_state is not None else 0. return Q[state][action] + alpha * (reward + gamma * Qsa_next - Q[state][action]) def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1 / i_episode state = env.reset() while True: action = choose_epsilon_greedy_action(Q[state], epsilon, nA) next_state, reward, done, info = env.step(action) Q[state][action] = update_Q_sarsa_max(Q, state, action, reward, alpha, gamma, next_state) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
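`update_Q_sarsa_max` falls back to a zero bootstrap when `next_state` is `None`, and the terminal state gives the same result implicitly because its action values never leave zero; either way the target collapses to the immediate reward. A tiny worked example with invented numbers:

```python
alpha, gamma = 0.01, 1.0
Q_sa, reward = -2.0, -1.0

# at a terminal transition there is nothing left to bootstrap from
target = reward + gamma * 0.0
Q_sa = Q_sa + alpha * (target - Q_sa)
print(Q_sa)   # -1.99: the estimate takes a small step toward the final reward of -1
```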
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_epsilon_greedy_prob(action_values, epsilon): # also handle the case where there is more than one greedy action nA = action_values.size greedy_action_indicator = (action_values >= np.max(action_values)).astype(np.float) num_greedy_action = np.sum(greedy_action_indicator) normed_greedy_action_prob = (1 - epsilon) / num_greedy_action epsilon_greedy_prob = (greedy_action_indicator * normed_greedy_action_prob) + (epsilon / nA) return epsilon_greedy_prob def update_Q_expected_sarsa(Q, state, action, reward, alpha, gamma, epsilon, next_state=None): epsilon_greedy_action_prob = get_epsilon_greedy_prob(Q[next_state], epsilon) Qsa_next = np.dot(epsilon_greedy_action_prob, Q[next_state]) return Q[state][action] + alpha * (reward + gamma * Qsa_next - Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 0.005 state = env.reset() while True: action = choose_epsilon_greedy_action(Q[state], epsilon, nA) next_state, reward, done, info = env.step(action) Q[state][action] = update_Q_expected_sarsa(Q, state, action, reward, alpha, gamma, epsilon, next_state) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
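`get_epsilon_greedy_prob` above also covers the case where several actions tie for the greedy value, splitting the `1 - epsilon` probability mass evenly among them. The snippet below restates that idea in isolation; the helper name and the action values are invented for illustration.

```python
import numpy as np

def tie_aware_probs(action_values, epsilon):
    """Epsilon-greedy probabilities that share the greedy mass across tied maxima."""
    nA = action_values.size
    greedy = (action_values >= np.max(action_values)).astype(float)
    return greedy * (1.0 - epsilon) / greedy.sum() + epsilon / nA

# two actions tie for the maximum, so each receives half of the (1 - epsilon) mass
print(tie_aware_probs(np.array([-11.0, -11.0, -13.0, -14.0]), 0.005))
```

Note that `choose_epsilon_greedy_action` still breaks ties with `np.argmax`, so when ties occur the expectation is slightly more spread out than the behavior policy; once the action values separate, the two agree.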
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
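The `q_learning` function above is still a TODO. As one possible starting point, and not the notebook's reference solution, the work done for each interaction could be factored out as below; this is a rough sketch that assumes `Q` is the `defaultdict` initialized above, and `epsilon_greedy` and `q_learning_step` are hypothetical helper names.

```python
import numpy as np

def epsilon_greedy(Q, state, nA, eps):
    """Pick the greedy action with probability 1 - eps, otherwise a uniformly random action."""
    if np.random.random() > eps:
        return np.argmax(Q[state])
    return np.random.randint(nA)

def q_learning_step(env, Q, state, eps, alpha, gamma):
    """Take one epsilon-greedy action and apply the sarsamax update; return (next_state, done)."""
    action = epsilon_greedy(Q, state, env.nA, eps)
    next_state, reward, done, info = env.step(action)
    target = reward + gamma * np.max(Q[next_state])   # zero at the terminal state, whose values stay at zero
    Q[state][action] += alpha * (target - Q[state][action])
    return next_state, done
```

Inside the episode loop you would then reset the environment, call this step function until `done` is `True`, and decay `eps` across episodes (for example `eps = 1.0 / i_episode`).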
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output c:\users\1\anaconda3\envs\qb_ml\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q,state,nA,eps): if random.random()>eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def update_Q_sarsa(alpha,gamma,Q,state,action,reward,next_state=None,next_action=None): current=Q[state][action] if next_state is not None: Qsa_next=Q[next_state][next_action] else: Qsa_next=0.0 target=reward+gamma*Qsa_next new_reward=current+alpha*(target-current) return new_reward def sarsa(env, num_episodes, alpha, gamma=1.0,plot_every=100): # initialize action-value function (empty dictionary of arrays) nA=env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores=deque(maxlen=plot_every) avg_scores=deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progressd if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score=0 state=env.reset() eps=1.0/i_episode action=eps_greedy(Q,state,nA,eps) while True: next_state,reward,done,info=env.step(action) score+=reward if not done: next_action=eps_greedy(Q,next_state,nA,eps) Q[state][action]=update_Q_sarsa(alpha,gamma,Q,state,action,reward,next_state,next_action) state=next_state action=next_action if done: Q[state][action]=update_Q_sarsa(alpha,gamma,Q,state,action,reward) tmp_scores.append(score) break if(i_episode%plot_every==0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
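Beyond the automated check, a quick informal test is to roll out the greedy policy from the learned action-value function and look at the episode length: the optimal route takes 13 steps along the cliff edge, while a slower epsilon decay tends to leave Sarsa on a slightly longer, safer detour. The helper below is hypothetical (it is not defined anywhere in this notebook) and assumes `Q` behaves like the `defaultdict` returned by `sarsa`.

```python
import numpy as np

def greedy_rollout(env, Q, max_steps=200):
    """Follow the greedy policy for one episode and return (steps, total reward)."""
    state = env.reset()
    steps, total_reward = 0, 0
    while steps < max_steps:
        action = np.argmax(Q[state])
        state, reward, done, info = env.step(action)
        steps += 1
        total_reward += reward
        if done:
            break
    return steps, total_reward

# after running the cell below, you could try, e.g.:
# print(greedy_rollout(env, Q_sarsa))
```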
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q,state,nA,eps): if random.random()>eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def update_Q_sarsamax(alpha,gamma,Q,state,action,reward,next_state=None): current=Q[state][action] if next_state is not None: Qsa_next=np.max(Q[next_state]) else: Qsa_next=0.0 target=reward+gamma*Qsa_next new_reward=current+alpha*(target-current) return new_reward def q_learning(env, num_episodes, alpha, gamma=1.0,plot_every=100): # initialize empty dictionary of arrays nA=env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores=deque(maxlen=plot_every) avg_scores=deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progressd if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score=0 state=env.reset() eps=1.0/i_episode while True: action=eps_greedy(Q,state,nA,eps) next_state,reward,done,info=env.step(action) score+=reward if not done: Q[state][action]=update_Q_sarsamax(alpha,gamma,Q,state,action,reward,next_state) state=next_state if done: Q[state][action]=update_Q_sarsamax(alpha,gamma,Q,state,action,reward) tmp_scores.append(score) break if(i_episode%plot_every==0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(Q,state,nA,eps): if random.random()>eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def update_Q_expsarsa(alpha,gamma,nA,Q,eps,state,action,reward,next_state=None): current=Q[state][action] if next_state is not None: policy_s=np.ones(nA)*eps/nA policy_s[np.argmax(Q[next_state])]=1-eps+eps/nA Qsa_next=np.dot(policy_s,Q[next_state]) else: Qsa_next=0.0 target=reward+gamma*Qsa_next new_reward=current+alpha*(target-current) return new_reward def expected_sarsa(env, num_episodes, alpha, gamma=1.0,plot_every=100): # initialize empty dictionary of arrays nA=env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores=deque(maxlen=plot_every) avg_scores=deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progressd if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score=0 state=env.reset() #eps=max(1.0/i_episode,0.005) eps=0.005 while True: action=eps_greedy(Q,state,nA,eps) next_state,reward,done,info=env.step(action) score+=reward if not done: Q[state][action]=update_Q_expsarsa(alpha,gamma,nA,Q,eps,state,action,reward,next_state) state=next_state if done: Q[state][action]=update_Q_expsarsa(alpha,gamma,nA,Q,eps,state,action,reward) tmp_scores.append(score) break if(i_episode%plot_every==0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. 
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /usr/lib64/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warnings.warn(message, mplDeprecation, stacklevel=1) /usr/lib64/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /usr/lib64/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /usr/lib64/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps): policy = np.ones(nA)*(eps/nA) best = np.argmax(Q[state]) policy[best] += (1-eps) return np.random.choice(np.arange(nA), p=policy) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): Qsa = Q[state][action] if(next_state == None or next_state == None): Qsa_next = 0 else: Qsa_next = Q[next_state][next_action] new_value = Qsa + alpha*(reward + gamma*Qsa_next - Qsa) return new_value def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_score = [] avg_score_sarsa = [] plot_every = 100 e = 0.01 e_decay = 0.9 e_min = 0.01 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #e = max(e*e_decay, e_min) score = 0 state = env.reset() action = epsilon_greedy(Q, state, env.nA, e) while True: next_state, reward, done, info = env.step(action) score += reward if(not done): next_action = epsilon_greedy(Q, next_state, env.nA, e) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action else: tmp_score.append(score) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) break if (i_episode % plot_every == 0): avg_score_sarsa.append(np.mean(score)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_score_sarsa),endpoint=False), np.asarray(avg_score_sarsa)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_score_sarsa)) return Q, avg_score_sarsa 
###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa, avg_score_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_learning(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): Qsa = Q[state][action] if(next_state == None or next_state == None): Qsa_next = 0 else: Qsa_next = max(Q[next_state]) new_value = Qsa + alpha*(reward + gamma*Qsa_next - Qsa) return new_value def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_score = [] avg_score_q = [] plot_every = 100 e = 0.01 e_decay = 0.9 e_min = 0.01 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #e = max(e*e_decay, e_min) score = 0 state = env.reset() action = epsilon_greedy(Q, state, env.nA, e) while True: next_state, reward, done, info = env.step(action) score += reward if(not done): next_action = epsilon_greedy(Q, next_state, env.nA, e) Q[state][action] = update_Q_learning(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action else: tmp_score.append(score) Q[state][action] = update_Q_learning(alpha, gamma, Q, state, action, reward) break if (i_episode % plot_every == 0): avg_score_q.append(np.mean(score)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_score_q),endpoint=False), np.asarray(avg_score_q)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), 
np.max(avg_score_q)) return Q, avg_score_q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax,avg_score_q = q_learning(env, 10000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa(alpha, gamma, Q, state, action, reward, nA, eps, next_state=None, next_action=None): policy = np.ones(nA)*(eps/nA) best = np.argmax(Q[state]) policy[best] += (1-eps) Qsa = Q[state][action] Qsa_next = np.dot(Q[next_state], policy) new_value = Qsa + alpha*(reward + gamma*Qsa_next - Qsa) return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_score = [] avg_score_expsarsa = [] plot_every = 100 e = 0.01 e_decay = 0.8 e_min = 0.005 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #e = max(e*e_decay, e_min) score = 0 state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, e) next_state, reward, done, info = env.step(action) score += reward if(not done): Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, state, action, reward, env.nA, e, next_state) state = next_state else: tmp_score.append(score) break if (i_episode % plot_every == 0): avg_score_expsarsa.append(np.mean(score)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_score_expsarsa),endpoint=False), np.asarray(avg_score_expsarsa)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_score_expsarsa)) 
return Q, avg_score_expsarsa ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa, avg_score_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown ExtraComparisson between the 3 to check their convergence speeds ###Code num_episodes = 10000 plt.plot(np.asarray(avg_score_sarsa), label="sarsa") plt.plot(np.asarray(avg_score_q), label="q-learning") plt.plot(np.asarray(avg_score_expsarsa), label="exp-sarsa") plt.legend() plt.show() avg_score_expsarsa ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import random import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps): #epsilon greedy action selection if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): #Returns updated Q-value for the most recent experience. 
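    # Sarsa TD target: R_{t+1} + gamma * Q(S_{t+1}, A_{t+1}).
    # No next state/action is passed in on the terminal step, so the bootstrap term below is 0.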
Qsa_next = Q[next_state][next_action] if next_state is not None else 0 return Q[state][action] + alpha*(reward + gamma*Qsa_next-Q[state][action]) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every = 100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1.0/i_episode action = epsilon_greedy(Q, state, env.nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, env.nA, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) break if i_episode%plot_every == 0: avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): # returns updated Q-value for the most recent experience Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 return Q[state][action] + alpha*(reward + gamma*Qsa_next - Q[state][action]) def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1.0/i_episode while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if i_episode%plot_every == 0: avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 10000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state): # returns updated Q-value for the most recent experience policy_s = np.ones(nA)*eps/nA policy_s[np.argmax(Q[next_state])] = policy_s[np.argmax(Q[next_state])] + (1-eps) Qsa_next = np.dot(Q[next_state], policy_s) return Q[state][action] + alpha*(reward + gamma*Qsa_next - Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 0.005 while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expsarsa(alpha, gamma, env.nA, eps, Q, state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if i_episode%plot_every == 0: avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): eps = 0.4 # encourages more exploration at the beggining as i -> inf, eps ~ 0 # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # tracks sums of rewards for each episode to plot later episodic_sum_of_rewards = [] for i_episode in range(1, num_episodes + 1): sum_of_rewards = 0 if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = eps / i_episode state = env.reset() # choose action based on eps-greedy policy probs = np.ones(env.nA) * (eps / env.nA) probs[np.argmax(Q[state])] = (1 - eps) + (eps / env.nA) action = np.random.choice(np.arange(env.nA), p=probs) while True: next_state, reward, done, info = env.step(action) sum_of_rewards += reward next_action = np.argmax(Q[next_state]) Q[state][action] += alpha * ( reward + gamma * Q[next_state][next_action] - Q[state][action] ) state = next_state action = next_action if done: episodic_sum_of_rewards.append(sum_of_rewards) break return Q, episodic_sum_of_rewards ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 5000 alpha = 0.01 Q_sarsa, sarsa_sum_of_rewards = sarsa(env, num_episodes, alpha) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): eps = 0.4 # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # tracks sums of rewards for each episode to plot later episodic_sum_of_rewards = [] for i_episode in range(1, num_episodes + 1): sum_of_rewards = 0 if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps /= i_episode state = env.reset() while True: # choose eps-greedy action (exploration versus exploitation) non_optimal_prob = eps / env.nA probs = np.ones(env.nA) * non_optimal_prob probs[np.argmax(Q[state])] = (1 - eps) + non_optimal_prob action = np.random.choice(np.arange(env.nA), p=probs) next_state, reward, done, info = env.step(action) sum_of_rewards += reward Q[state][action] += alpha * ( reward + gamma * np.max(Q[next_state]) - Q[state][action] ) state = next_state if done: episodic_sum_of_rewards.append(sum_of_rewards) break return Q, episodic_sum_of_rewards ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 5000 alpha = 0.01 Q_sarsamax, sarsamax_sum_of_rewards = q_learning(env, num_episodes, alpha) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) # Plot the rewards plt.xlabel('episode') plt.ylabel('Sum of rewards') plt.ylim(0, -500) plt.xlim(0, 1000) plt.plot(np.arange(1, len(sarsa_sum_of_rewards) + 1), sarsa_sum_of_rewards, color='blue', label='sarsa', alpha=1) plt.plot(np.arange(1, len(sarsamax_sum_of_rewards) + 1), sarsamax_sum_of_rewards, color='red', label='sarsamax', alpha=0.6) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def draw_eps_greedy_probs(env, q_value, eps): non_optimal_prob = eps / env.nA probs = np.ones(env.nA) * non_optimal_prob probs[np.argmax(q_value)] = (1 - eps) + non_optimal_prob return probs def expected_sarsa(env, num_episodes, alpha, gamma=1.0): eps = 0.4 # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # tracks sums of rewards for each episode to plot later episodic_sum_of_rewards = [] for i_episode in range(1, num_episodes+1): sum_of_rewards = 0 if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps /= i_episode state = env.reset() while True: probs = draw_eps_greedy_probs(env, Q[state], eps) action = np.random.choice(np.arange(env.nA), p=probs) next_state, reward, done, info = env.step(action) sum_of_rewards += reward probs = draw_eps_greedy_probs(env, Q[next_state], eps) Q[state][action]+= alpha * (reward + gamma * probs.dot(Q[next_state].T) - Q[state][action]) state = next_state if done: episodic_sum_of_rewards.append(sum_of_rewards) break return Q, episodic_sum_of_rewards ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 5000 alpha = 0.01 Q_expsarsa, exp_sarsa_sum_of_rewards = expected_sarsa(env, num_episodes, alpha) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4, 12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) # Plot the rewards plt.xlabel('episode') plt.ylabel('Sum of rewards') plt.ylim(-500, 0) plt.xlim(0, 800) x = np.linspace(0, 800, 50).astype(np.int) sarsa_y = [ sarsa_sum_of_rewards[val] for val in x ] sarsamax_y = [ sarsamax_sum_of_rewards[val] for val in x ] exp_sarsa = [ exp_sarsa_sum_of_rewards[val] for val in x ] plt.plot(x, sarsa_y, color='blue', label='sarsa', alpha=1) plt.plot(x, sarsamax_y, color='red', label='sarsamax', alpha=0.7) plt.plot(x, exp_sarsa, color='green', label='exp_sarsa', alpha=0.7) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code from scipy.stats import bernoulli def epsilon_greedy_action(state, Q, epsilon, env): if bernoulli.rvs(1 - epsilon): # Exploit. Take the greedy action action = np.argmax(Q[state]) else: # Explore. 
        # Sample uniformly from all actions
        action = env.action_space.sample()
    return action

def update_Q_sarsa(env, Q, state, action, reward, next_state, next_action, alpha, gamma):
    old_Q = Q[state][action]
    Q[state][action] = old_Q + alpha * (reward + gamma * Q[next_state][next_action] - old_Q)
    return Q

def sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        ## TODO: complete the function
        # Decay epsilon linearly, up to a minimum of 0.1
        epsilon = max((num_episodes - i_episode) / num_episodes, 0.1)
        state = env.reset()
        action = epsilon_greedy_action(state, Q, epsilon, env)
        while True:
            next_state, reward, done, info = env.step(action)
            next_action = epsilon_greedy_action(next_state, Q, epsilon, env)
            Q = update_Q_sarsa(env, Q, state, action, reward, next_state, next_action, alpha, gamma)
            state = next_state
            action = next_action
            if done:
                break
    return Q
###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate.
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa_max(env, Q, state, action, reward, next_state, alpha, gamma): old_Q = Q[state][action] next_action = np.argmax(Q[next_state]) Q[state][action] = old_Q + alpha * (reward + gamma * Q[next_state][next_action] - old_Q) return Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = max((num_episodes - i_episode) / num_episodes, 0.1) state = env.reset() action = epsilon_greedy_action(state, Q, epsilon, env) while True: next_state, reward, done, info = env.step(action) next_action = epsilon_greedy_action(next_state, Q, epsilon, env) Q = update_Q_sarsa_max(env, Q, state, action, reward, next_state, alpha, gamma) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_probabilities(state, Q, epsilon, env): nA = env.action_space.n policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q[state]) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Q_expected_sarsa(env, Q, state, action, reward, next_state, alpha, gamma, epsilon): old_Q = Q[state][action] action_probabs = epsilon_greedy_probabilities(next_state, Q, epsilon, env) expected_value = np.dot(Q[next_state], action_probabs) Q[state][action] = old_Q + alpha * (reward + gamma * expected_value - old_Q) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = max((num_episodes - i_episode) / num_episodes, 0.1) state = env.reset() action = epsilon_greedy_action(state, Q, epsilon, env) while True: next_state, reward, done, info = env.step(action) next_action = epsilon_greedy_action(next_state, Q, epsilon, env) Q = update_Q_expected_sarsa(env, Q, state, action, reward, next_state, alpha, gamma, epsilon) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
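If you want to poke at the environment before implementing any learning algorithm, a minimal interaction sketch (assuming the classic Gym interface used throughout this notebook, where `reset()` returns a state index and `step()` returns a `(next_state, reward, done, info)` tuple) looks like this:

```
import gym  # already imported above; repeated here so the snippet stands on its own

env = gym.make('CliffWalking-v0')
state = env.reset()                            # every episode begins in state 36
next_state, reward, done, info = env.step(0)   # UP = 0; any move that avoids the cliff yields a reward of -1
print(state, next_state, reward, done)
```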
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code env.step(0) def get_probabilities(action_values, epsilon): n = action_values.size p = np.full(shape=(n,), fill_value=epsilon/action_values.size) p[np.argmax(action_values)] = 1 - epsilon + epsilon / action_values.size return p def epsilon_greedy(env, state, Q, epsilon): n_a = env.nA action = np.random.choice(np.arange(n_a), p=get_probabilities(Q[state], epsilon)) \ if state in Q else env.action_space.sample() return action def sarsa(env, num_episodes, alpha, gamma=1.0, epsilon=1.0, epsilon_decay=.01, epsilon_min=0.001): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon_i = max(epsilon * np.exp(-i_episode * epsilon_decay), epsilon_min) state = env.reset() a_t = epsilon_greedy(env, state, Q, epsilon_i) while True: s_t_1, r_t_1, done, _ = env.step(a_t) if done: Q[state][a_t] = Q[state][a_t] + alpha * (r_t_1 - Q[state][a_t]) break else: a_t_1 = epsilon_greedy(env, s_t_1, Q, epsilon_i) Q[state][a_t] = Q[state][a_t] + alpha * (r_t_1 + gamma * Q[s_t_1][a_t_1] - Q[state][a_t]) state = s_t_1 a_t = a_t_1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function # Note: in this case using exponential decay doesn't work too well # it seems like the agent doesn't need a lot of exploration and can quickly # find the best Q values without a lot of random sampling of the environment # hence the rapid decay and very low epsilon_min value Q_sarsa = sarsa(env, 5000, .01, epsilon_decay=0.1, epsilon_min=0.0001) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, epsilon=1.0, epsilon_decay=.01, epsilon_min=0.0001): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon_i = max(epsilon * np.exp(-i_episode * epsilon_decay), epsilon_min) state = env.reset() a_t = epsilon_greedy(env, state, Q, epsilon_i) while True: s_t_1, r_t_1, done, _ = env.step(a_t) if done: # this term is the same because the Q value of the final state # always decays to 0 Q[state][a_t] = Q[state][a_t] + alpha * (r_t_1 - Q[state][a_t]) break else: a_t_1 = epsilon_greedy(env, s_t_1, Q, epsilon_i) # only change from SARSA is in the term gamma * Q[s_t_1][a_t_1] to # gamma * Q[s_t_1].max() -> we always choose the best possible action # to improve Q value, rather than the epsilon greedy sampled action Q[state][a_t] = Q[state][a_t] + alpha * (r_t_1 + gamma * Q[s_t_1].max() - Q[state][a_t]) state = s_t_1 a_t = a_t_1 return Q ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. 
Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # unused def epsilon_greedy_probs(Q_s, epsilon, nA): """Returns a list of the probabilities of nA ordered actions Q_s for the implementation of an epsilon greedy policy, at a particular state s. The action with the highest value in Q_s is chosen as the greedy action a* and its probability is set at 1-e+e/nA. The probabilities for all other actions are set at e/nA.""" policy_probs = np.ones(nA) * epsilon / nA arg_a_star = np.argmax(Q_s) policy_probs[arg_a_star] = 1 - epsilon + epsilon / nA return policy_probs #███╗ ██╗ ██████╗ ████████╗███████╗ #████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝ #██╔██╗ ██║██║ ██║ ██║ █████╗ #██║╚██╗██║██║ ██║ ██║ ██╔══╝ #██║ ╚████║╚██████╔╝ ██║ ███████╗ #╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝ # http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE # # 1. Programmatically, the easiest greedy-action choice looks like # the following function, where the condition is # if random.random() > epsilon: # return np.argmax(Q[state]) def choose_greedy_action(Q, state, epsilon, nA): # return np.random.choice(np.arange(nA), p=epsilon_greedy_probs(Q[state], epsilon, nA)) \ # if state in Q else env.action_space.sample() if random.random() > epsilon: return np.argmax(Q[state]) else: # return env.action_space.sample() return np.random.choice(np.arange(nA)) # 2. The mandatory anomalous case of reaching done=True and having # no state_next and action_next is handled by using the # convention that the terminal state is 0 (see Udacity cheat sheet: # https://github.com/udacity/deep-reinforcement-learning/blob/master/cheatsheet/cheatsheet.pdf, # Algorithm 13: Sarsa) # # Programmatically, this is done # - using the defaultdict lambda of np.zeros(nA), # - default argument value None for state_next and action_next, and # - a local conditional on state_next with default 0.0 (TODO: Is it necessary?) 
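# # Added reference notes (not in the original comments): # - The Sarsa update implemented below is # Q(S,A) <- Q(S,A) + alpha * (R + gamma * Q(S',A') - Q(S,A)), # with Q(S',A') taken as 0 on the transition into the terminal state. # - Epsilon-greedy arithmetic check: with nA = 4 and epsilon = 0.1, the greedy # action gets probability 1 - 0.1 + 0.1/4 = 0.925 and each other action gets # 0.1/4 = 0.025, and the four probabilities sum to 1. # - Re the TODO just above: the `state_next in Q` guard below is what makes the # terminal call (state_next=None, action_next=None) safe, since it short-circuits # to 0.0 before any indexing of Q[None] happens.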
def update_Q_sarsa(alpha, gamma, Q, state, action, reward, state_next=None, action_next=None): curr_estimate = Q[state][action] next_state_estimate = Q[state_next][action_next] if state_next in Q else 0.0 sarsa_estimate = reward + gamma * next_state_estimate # sarsa update updated_estimate = curr_estimate + alpha * (sarsa_estimate - curr_estimate) return updated_estimate def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # ?? does the terminal state have to be initialized? # number of actions nA = env.action_space.n # initialize performance monitor - ?? # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # start a new episode state = env.reset() # set epsilon for the episode epsilon = 1.0 / i_episode # pick action according to e-greedy policy action = choose_greedy_action(Q, state, epsilon, nA) # episode steps loop here while True: # take a step with the last action state_next, reward, done, info = env.step(action) if not done: # pick the next action action_next = choose_greedy_action(Q, state_next, epsilon, nA) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, state_next, action_next) state = state_next action = action_next else: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def choose_greedy_action(Q, state, epsilon, nA): if random.random() > epsilon: return np.argmax(Q[state]) else: return np.random.choice(np.arange(nA)) def update_Q_learning(alpha, gamma, Q, state, action, reward, state_next=None, action_next=None): curr_estimate = Q[state][action] greedy_action = np.argmax(Q[state_next]) if state_next in Q else -1 next_state_estimate = Q[state_next][greedy_action] if state_next in Q else 0.0 sarsamax_estimate = reward + gamma * next_state_estimate # sarsamax update updated_estimate = curr_estimate + alpha * (sarsamax_estimate - curr_estimate) return updated_estimate def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # start a new episode state = env.reset() # set epsilon for the episode epsilon = 1.0 / i_episode # pick action according to e-greedy policy action = choose_greedy_action(Q, state, epsilon, nA) # episode steps loop here while True: # take a step with the last action state_next, reward, done, info = env.step(action) if not done: # pick the next action action_next = choose_greedy_action(Q, state_next, epsilon, nA) Q[state][action] = update_Q_learning(alpha, gamma, Q, state, action, reward, state_next, action_next) state = state_next action = action_next else: Q[state][action] = update_Q_learning(alpha, gamma, Q, state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
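(An added aside, not part of the original exercise: after the next cell has run, individual action-value estimates can be read straight out of the returned dictionary. For the start state `36`, the greedy action under the optimal policy is `UP = 0`, since moving right from the start drops straight into the cliff.)
```
# assumes the next cell has already produced Q_sarsamax
print(Q_sarsamax[36])             # the four action values for the start state
print(np.argmax(Q_sarsamax[36]))  # expected to be 0 (UP) once learning has converged
```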
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def choose_greedy_action(Q, state, epsilon, nA): if random.random() > epsilon: return np.argmax(Q[state]) else: return np.random.choice(np.arange(nA)) #███╗ ██╗ ██████╗ ████████╗███████╗ #████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝ #██╔██╗ ██║██║ ██║ ██║ █████╗ #██║╚██╗██║██║ ██║ ██║ ██╔══╝ #██║ ╚████║╚██████╔╝ ██║ ███████╗ #╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝ # http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE # # 1. The probabilities used in the expectation sum are the ones that # result from the stochasticity of the (epsilon-greedy) policy # and not the stochasticity of the environment. That is, the # stochasticity comes from the agent rather than the environment. # # 2. No need for action_next, as all are considered. # # 3. Too small an epsilon may result in overflow and mess up arbitrary # state value calculations. def update_Q_sarsaexp(alpha, gamma, nA, epsilon, Q, state, action, reward, state_next=None): curr_estimate = Q[state][action] greedy_probs = np.ones(nA) * epsilon / nA greedy_probs[np.argmax(Q[state_next])] = 1 - epsilon + epsilon / nA next_state_estimate = np.dot(Q[state_next], greedy_probs) sarsaexp_estimate = reward + gamma * next_state_estimate # sarsaexp update updated_estimate = curr_estimate + alpha * (sarsaexp_estimate - curr_estimate) return updated_estimate def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() #████████╗ ██████╗ ██████╗ ██████╗ #╚══██╔══╝██╔═══██╗██╔══██╗██╔═══██╗ # ██║ ██║ ██║██║ ██║██║ ██║ # ██║ ██║ ██║██║ ██║██║ ██║ # ██║ ╚██████╔╝██████╔╝╚██████╔╝ # ╚═╝ ╚═════╝ ╚═════╝ ╚═════╝ # # http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=TODO # # 1. Why the sensitivity to epsilon?!? Several states get almost # scrambled values as a result, and they are different depending # on the value of epsilon. 
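# # Added note, one plausible answer to the question above (not from the original # notebook): the test cell calls expected_sarsa(env, 10000, 1), i.e. alpha = 1, so # every update completely overwrites Q[state][action] with the one-step target # reward + gamma * E[Q(S', .)]. When epsilon is large, that expectation puts # noticeable probability on actions that step into the cliff (reward -100), so the # targets for states along the cliff edge keep jumping between episodes instead of # settling. With a tiny epsilon the expectation is essentially the greedy value # max_a Q(S', a), the update behaves like Q-learning, and the values stabilise. # Reducing alpha below 1 would also damp the oscillation.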
## TODO: complete the function # epsilon = 1.0 / i_episode # doesn't work # epsilon = max(epsilon, 0.005) # doesn't work epsilon = 0.0001 # works # epsilon = max(1.0 / i_episode, 0.005) # doesn't work state = env.reset() while True: action = choose_greedy_action(Q, state, epsilon, nA) state_next, reward, done, info = env.step(action) Q[state][action] = update_Q_sarsaexp(alpha, gamma, nA, epsilon, Q, state, action, reward, state_next) state = state_next if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output /home/brabeem/anaconda3/lib/python3.7/site-packages/ale_py/roms/utils.py:90: DeprecationWarning: SelectableGroups dict interface is deprecated. Use select. for external in metadata.entry_points().get(self.group, []): ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. 
Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code print(np.random.random()) def epsilon_greedy(Q,state,epsilon,nA): if np.random.random() > epsilon: return np.argmax(Q[state]) else: return np.random.choice(np.arange(nA)) def update_Q_sarsa(Q,state,reward,action,next_state,next_action,gamma,alpha): current = Q[state][action] qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + gamma * qsa_next new_value = current + alpha*(target - current) return new_value def sarsa(env, num_episodes, alpha, gamma=1.0,epsilon_min=.05,epsilon_decay=0.9999): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes epsilon = 1 scores = [] avg_score = [] for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() epsilon = 1/i_episode action = epsilon_greedy(Q,state,epsilon,nA=env.action_space.n) score = 0 while True: next_state,reward,done,info = env.step(action) score += reward if done is False: next_action = epsilon_greedy(Q,next_state,epsilon,nA=env.action_space.n) new_value = update_Q_sarsa(Q,state,reward,action,next_state,next_action,gamma,alpha) Q[state][action] = new_value state = next_state action = next_action if done is True: update_Q_sarsa(Q,state,reward,action,next_state,next_action,gamma,alpha) break scores.append(score) avg_score.append(0.01*np.sum(np.array(scores[:100]))) plt.plot(np.arange(len(scores)),scores) #plt.plot(avg_score) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
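(Added point of reference, not in the original notebook: the shortest route from the start state to the goal takes 13 moves at reward -1 each, so the best possible undiscounted episode return is 13 * (-1) = -13, the same value assigned to the start state in `V_opt` above. Episode scores approaching -13 in the plot produced by `sarsa` therefore indicate a near-optimal greedy policy; the epsilon-greedy behaviour itself averages somewhat lower while exploration is still active.)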
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_q_sarsamax(q,state,action,alpha,next_state,gamma,reward): current = q[state][action] qsa_next = max(q[next_state]) if next_state is not None else 0 target = reward + gamma * qsa_next new_value = q[state][action] + alpha * (target - current) return new_value def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1/i_episode state = env.reset() while True: action = epsilon_greedy(Q,state,epsilon,env.action_space.n) next_state,reward,done,_= env.step(action) new_value = update_q_sarsamax(Q,state,action,alpha,next_state,gamma,reward) Q[state][action] = new_value state = next_state if done is True: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_q_expected_sarsa(q,state,action,next_state,reward,gamma,alpha,epsilon): current = q[state][action] nA = env.action_space.n policy_s = (epsilon/nA) * np.ones(nA) policy_s[np.argmax(q[next_state])] = 1-epsilon + (epsilon/nA) qsa_next = np.dot(policy_s,q[next_state]) target = reward + gamma * qsa_next new_value = current + alpha * (target - current) return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 0.005 state = env.reset() while True: action = epsilon_greedy(Q,state,epsilon,env.action_space.n) next_state,reward,done,info= env.step(action) Q[state][action] = update_q_expected_sarsa(Q,state,action,next_state,reward,gamma,alpha,epsilon) state = next_state if done is True: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
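(Added illustration, not part of the original exercise: the Expected Sarsa target replaces the single sampled next-action value with an expectation over the epsilon-greedy policy. A standalone check of that computation, using a made-up row of action values:)
```
import numpy as np
q_next = np.array([-3.0, -1.0, -2.0, -4.0])      # hypothetical Q[next_state]
eps, nA = 0.005, 4
probs = np.full(nA, eps / nA)
probs[np.argmax(q_next)] = 1 - eps + eps / nA    # greedy action gets almost all the mass
print(probs.sum())                               # 1.0
print(np.dot(q_next, probs))                     # about -1.0075, close to max(q_next)
```
With the small `epsilon = 0.005` used in the implementation above, the expectation is nearly the greedy value, so the update behaves much like Q-learning.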
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code """ both get_a_from_q functions work perfectly! get_a_from_q is much faster than get_a_from_q_back Reason: ======= in get_a_from_q_back we have 1 additional structure (prob) and 3 asignments! 
""" # get the action value according the epsilon-greedy policy def get_a_from_q_back(state, Q, nA, epsilon): """ select epsilon-greedy action for supplied state Params ====== state (int): current state Q (dictionary): action-value function nA (int): action space size = # of actions in the environment epsilon(float): epsilon Return ====== selected action """ # init an array of size of action space with the # probability values = epsilon / size of action space (nA) prob = np.ones(nA) * (epsilon / nA) # get the index of the best / heighest action value of state S best_a = np.argmax(Q[state]) # best action value get the probability 1 - epsilon + (epsilon / nA) prob[best_a] = 1 - epsilon + (epsilon / nA) # determin the action according the probability distribution for the given action values action = np.random.choice(np.arange(4), p = prob) return action def get_a_from_q(state, Q, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== state (int): current state Q (dictionary): action-value function nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = 0.9999, eps_min = 0.05): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) - initial value is zero Q = defaultdict(lambda: np.zeros(env.nA)) # init epsilon epsilon = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # calculate epsilon epsilon = max(epsilon * eps_decay, eps_min) # epsilon = 1.0 / i_episode # epsilon = 0.1 # get starting state state = env.reset() # get action conditioned on the starting state action = get_a_from_q(state, Q, nA, epsilon) while True: # get the next step next_state, reward, done, info = env.step(action) # get action conditioned on state S from the Q-tabele by an epsilon greedy policy next_action = get_a_from_q(next_state, Q, nA, epsilon) # store the old state old_state_action_pair = Q[state][action] Qsa_next = Q[next_state][next_action] # construct TD target target = reward + (gamma* Qsa_next) # update the Q-table Q[state][action] = old_state_action_pair + (alpha * (target - old_state_action_pair)) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = 0.9999, eps_min = 0.05): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # calculate epsilon # epsilon = 1.0 / i_episode epsilon = max(epsilon * eps_decay, eps_min) # epsilon = 0.1 state = env.reset() while True: action = get_a_from_q(state, Q, nA, epsilon) next_state, reward, done, info = env.step(action) old_state_action_pair = Q[state][action] Q[state][action] = old_state_action_pair + alpha*(reward + gamma * (np.max(Q[next_state])) - old_state_action_pair) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
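(Added side calculation, not part of the original notebook: the `sarsa` and `q_learning` implementations above both default to the multiplicative schedule `epsilon = max(epsilon * eps_decay, eps_min)` with `eps_decay = 0.9999`. Over 5,000 episodes epsilon only falls to roughly `0.9999 ** 5000`, about `0.61`, so the floor of `0.05` is never reached and the behaviour policy stays quite exploratory. The checks evaluate the greedy policy `np.argmax(Q[s])` extracted from the learned values rather than the behaviour policy; for the on-policy Sarsa agent a persistently large epsilon also leaks into the learned values themselves, making states near the cliff look more dangerous, so decaying epsilon faster can help if its check does not pass.)
```
print(0.9999 ** 5000)   # about 0.6065
```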
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_expected_value(S, Q, nA, epsilon): # init an array of size of action space with the # probability values = epsilon / size of action space (nA) prob = np.ones(nA) * (epsilon / nA) # get the index of the best / heighest action value of state S best_a = np.argmax(Q[S]) # best action value get the probability 1 - epsilon + (epsilon / nA) prob[best_a ] = 1 - epsilon + (epsilon / nA) # determin the expected value for state S # a = np.dot(Q[S], prob) aa = Q[S] * prob a = np.sum(aa) return a def expected_sarsa(env, num_episodes, alpha, gamma=1.0): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # epsilon = 1.0 / i_episode epsilon = 0.005 state = env.reset() while True: action = get_a_from_q(state, Q, nA, epsilon) next_state, reward, done, info = env.step(action) old_qsa = Q[state][action] expectation = get_expected_value(next_state, Q, nA, epsilon) Q[state][action] = old_qsa + alpha * (reward + (gamma * expectation) - old_qsa) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
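(A small added convenience, not part of the original exercise: once the next cell has produced `Q_expsarsa`, the greedy policy can be rendered with arrows instead of the 0-3 action codes, which is a little easier to read.)
```
arrows = np.array(list('^>v<'))   # UP, RIGHT, DOWN, LEFT
greedy = np.array([np.argmax(Q_expsarsa[s]) if s in Q_expsarsa else 0 for s in np.arange(48)])
print(arrows[greedy].reshape(4, 12))
```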
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(Q, sarsa_data, alpha, gamma): state_t, action_t, reward_t, state_tt, action_tt = sarsa_data if action_tt is None: current_return_estimate = 0 else: current_return_estimate = Q[state_tt][action_tt] Q[state_t][action_t] += alpha*(reward_t + gamma*current_return_estimate - Q[state_t][action_t]) return Q def epsilon_greedy_policy(Q, state, epsilon, nA): if state in Q.keys(): act_prob = np.repeat(epsilon/nA, nA) p_optimal = 1 - epsilon + epsilon/nA act_prob[np.argmax(Q[state])] = p_optimal action = np.random.choice(nA, p=act_prob) else: action = np.random.randint(nA) return action def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA episode_return = 0 # initialize performance monitor temp_list = [] ave_return = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: ave_score = np.array(temp_list).mean() ave_return.append(ave_score) temp_list = [] print("\rEpisode {}/{}: | ave_return: {}".format(i_episode, num_episodes, ave_score), end="") sys.stdout.flush() ## TODO: complete the function episode_return = 0 state_t = env.reset() epsilon = max(1.0 / i_episode, 0.01) while True: action_t = epsilon_greedy_policy(Q, state_t, epsilon, nA) state_tt, reward_t, done, info = env.step(action_t) episode_return += reward_t if done: sarsa_data = (state_t, action_t, reward_t, state_tt, None) Q = update_Q(Q, sarsa_data, alpha, gamma) temp_list.append(episode_return) break else: action_tt = epsilon_greedy_policy(Q, state_tt, epsilon, nA) sarsa_data = (state_t, action_t, reward_t, state_tt, action_tt) Q = update_Q(Q, sarsa_data, alpha, gamma) action_t = action_tt state_t = state_tt # plot performance plt.plot(np.linspace(0,num_episodes,len(ave_return),endpoint=False), np.asarray(ave_return)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(ave_return)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000: | ave_return: -17.389494949495 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_ql(Q, sars_data, alpha, gamma): state_t, action_t, reward_t, state_tt = sars_data if state_tt is None: current_return_estimate = 0 else: current_return_estimate = Q[state_tt][np.argmax(Q[state_tt])] Q[state_t][action_t] += alpha*(reward_t + gamma*current_return_estimate - Q[state_t][action_t]) return Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA episode_return = 0 # initialize performance monitor temp_list = [] ave_return = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: ave_score = np.array(temp_list).mean() ave_return.append(ave_score) temp_list = [] print("\rEpisode {}/{}: | ave_return: {}".format(i_episode, num_episodes, ave_score), end="") sys.stdout.flush() ## TODO: complete the function episode_return = 0 state_t = env.reset() epsilon = max(1.0 / i_episode, 0.001) while True: action_t = epsilon_greedy_policy(Q, state_t, epsilon, nA) state_tt, reward_t, done, info = env.step(action_t) episode_return += reward_t if done: sars_data = (state_t, action_t, reward_t, None) Q = update_Q_ql(Q, sars_data, alpha, gamma) temp_list.append(episode_return) break else: sars_data = (state_t, action_t, reward_t, state_tt) Q = update_Q_ql(Q, sars_data, alpha, gamma) state_t = state_tt # plot performance plt.plot(np.linspace(0,num_episodes,len(ave_return),endpoint=False), np.asarray(ave_return)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(ave_return)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000: | ave_return: -14.0871717171717 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def action_prob_ep_greedy_policy(Q, state, epsilon, nA): if state in Q.keys(): act_prob = np.repeat(epsilon/nA, nA) p_optimal = 1 - epsilon + epsilon/nA act_prob[np.argmax(Q[state])] = p_optimal else: act_prob = np.repeat(1/nA, nA) return act_prob def update_Q_es(Q, sars_data, alpha, gamma, nA, action_tt_prob): state_t, action_t, reward_t, state_tt = sars_data if state_tt is None: current_return_estimate = 0 else: current_return_estimate = np.dot(Q[state_tt], action_tt_prob) Q[state_t][action_t] += alpha*(reward_t + gamma*current_return_estimate - Q[state_t][action_t]) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA episode_return = 0 epsilon = 1 # initialize performance monitor temp_list = [] ave_return = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: ave_score = np.array(temp_list).mean() ave_return.append(ave_score) temp_list = [] print("\rEpisode {}/{}: | ave_return: {}".format(i_episode, num_episodes, ave_score), end="") sys.stdout.flush() ## TODO: complete the function episode_return = 0 state_t = env.reset() epsilon = 0.005 # max(0.999*epsilon, 0.001) while True: act_prob = action_prob_ep_greedy_policy(Q, state_t, epsilon, nA) action_t = np.random.choice(nA, p=act_prob) state_tt, reward_t, done, info = env.step(action_t) episode_return += reward_t if done: sars_data = (state_t, action_t, reward_t, None) Q = update_Q_es(Q, sars_data, alpha, gamma, nA, None) temp_list.append(episode_return) break else: sars_data = (state_t, action_t, reward_t, state_tt) act_prob = action_prob_ep_greedy_policy(Q, state_tt, epsilon, nA) Q = update_Q_es(Q, sars_data, alpha, gamma, nA, act_prob) state_t = state_tt # plot performance plt.plot(np.linspace(0,num_episodes,len(ave_return),endpoint=False), np.asarray(ave_return)) plt.xlabel('Episode Number') 
plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(ave_return)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000: | ave_return: -14.1630303030305 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode 
Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step # Qsa_next = Q[next_state][next_action] if next_state is not None else 0 Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += 
reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
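For reference, the update applied by `update_Q_expsarsa` above replaces the sampled next-action value used by Sarsa with its expectation under the $\epsilon$-greedy policy, so the update is

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a) - Q(S_t, A_t) \Big).$$

The `np.dot(Q[next_state], policy_s)` term in the helper computes exactly this expectation.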
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) 36 ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output C:\Users\anu\Anaconda3\envs\drlnd\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current_value = Q[state][action] Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) new_value = current_value + alpha * (target - current_value) return new_value def epsilon_greedy(Q, state, nA, eps): # with probability 1 - eps pick the greedy action, otherwise explore uniformly at random if np.random.random() > eps: return np.argmax(Q[state]) else: return np.random.choice(np.arange(nA)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.nA # number of actions # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1.0 / i_episode action = epsilon_greedy(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action if done: # there is no next state-action pair at a terminal state, so the target is just the reward Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
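For reference, `update_Q_sarsa` above implements the one-step Sarsa update

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big),$$

where $Q(S_{t+1}, A_{t+1})$ is taken to be zero when $S_{t+1}$ is terminal (the `next_state=None` case).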
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt import time %matplotlib inline import check_test from plot_utils import plot_values from IPython.display import clear_output ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. 
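You can also sanity-check the reward structure by hand. The next cell is a small optional sketch (it is not part of the original starter code): stepping off the start into the cliff should incur the -100 penalty, while an ordinary move costs -1.

###Code
# optional sanity check of the reward structure (not part of the original starter code)
state = env.reset()                              # every episode begins in state 36 (bottom-left corner)
print("start state:", state)

next_state, reward, done, info = env.step(1)     # RIGHT from the start walks into the cliff
print("after RIGHT:", next_state, reward, done)  # expect reward -100; in this Gym version the agent is sent back to the start

next_state, reward, done, info = env.step(0)     # UP is an ordinary move onto the row above the cliff
print("after UP:", next_state, reward, done)     # expect reward -1
###Output
_____no_output_____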
###Code for i in range(3): state = env.reset() while True: env.render() action = env.action_space.sample() next_state, reward, done, info = env.step(action) clear_output(wait=True) time.sleep(0.2) if done is True: break print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s epsilon = 0 def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99, eps_min=0.09): global epsilon # initialize action-value function (empty dictionary of arrays) nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = eps_start # initialize performance monitor # loop over episodes state = env.reset() for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = max(epsilon*eps_decay, eps_min) action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) if done: state = env.reset() continue next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if state in Q else env.action_space.sample() changes = alpha*(reward + gamma*Q[next_state][next_action] - Q[state][action]) Q[state][action] = Q[state][action] + changes state = next_state #print(Q[state][action]) ## TODO: complete the function return Q sum(get_probs(Q_sarsa[32], 0.4, 4)) ###Output _____no_output_____ ###Markdown Use the next 
code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 50000, .09) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 50000/50000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.999, eps_min=0.01): # initialize empty dictionary of arrays nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes state = env.reset() epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = max(epsilon*eps_decay, eps_min) action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) if done: state = env.reset() continue Q[state][action] = Q[state][action] + alpha*(reward + gamma*Q[next_state][np.argmax(Q[next_state])] - Q[state][action]) state = next_state ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
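Note that `Q[next_state][np.argmax(Q[next_state])]` in the cell above is just $\max_a Q(S_{t+1}, a)$, so this is the Q-learning (Sarsamax) update

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \big).$$

Because the target always uses the greedy action, regardless of the action actually taken next, Q-learning is off-policy, whereas Sarsa above is on-policy.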
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 50000, .1) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 50000/50000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.999, eps_min=0.01): # initialize empty dictionary of arrays nA = env.nA Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes state = env.reset() epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = max(epsilon*eps_decay, eps_min) action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) if done: state = env.reset() continue Q[state][action] = Q[state][action] + alpha*(reward + gamma*sum(Q[next_state]*get_probs(Q[next_state], epsilon, nA)) - Q[state][action]) state = next_state ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
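The `get_probs` helper used above returns the $\epsilon$-greedy action probabilities

$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \epsilon / |\mathcal{A}| & \text{if } a = \arg\max_{a'} Q(s, a') \\ \epsilon / |\mathcal{A}| & \text{otherwise,} \end{cases}$$

so `sum(Q[next_state] * get_probs(Q[next_state], epsilon, nA))` in the cell above is the expected action value $\sum_a \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a)$ that Expected Sarsa uses in place of a sampled next-action value.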
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, .1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_probs(env, Q_s, i_episode, eps=None): epsilon = 1 / i_episode if eps is not None: epsilon = eps policy_s = np.ones(env.nA) * epsilon / env.nA policy_s[np.argmax(Q_s)] = 1 - epsilon + epsilon / env.nA return policy_s def update_Q(Qsa, Qsa_next, reward, alpha, gamma): return Qsa + alpha * (reward + gamma * Qsa_next - Qsa) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) plot_every = 100 tmp_scores = [] scores = [] # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() policy_s = epsilon_greedy_probs(env, Q[state], i_episode) action = np.random.choice(np.arange(env.nA), p=policy_s) for t_step in np.arange(300): next_state, reward, done, info = env.step(action) score += reward if not done: policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) next_action = np.random.choice(np.arange(env.nA), p=policy_s) Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) state = next_state action = next_action if done: Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
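Once training has finished, a quick way to sanity-check the returned Q-table, beyond the unit test and plots below, is to roll out the greedy policy for a single episode and inspect the total reward (the optimal path earns a return of -13). The helper below is an optional sketch and assumes a trained dictionary such as the `Q_sarsa` produced in the next cell.

###Code
# optional: roll out the greedy policy from a trained Q-table (illustrative sketch)
def rollout_greedy(env, Q, max_steps=300):
    state = env.reset()
    total_reward = 0
    for _ in range(max_steps):
        action = np.argmax(Q[state])                   # act greedily with respect to the learned values
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

# example usage (after the next cell has run): print(rollout_greedy(env, Q_sarsa))
###Output
_____no_output_____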
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) plot_every = 100 tmp_scores = [] scores = [] # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() policy_s = epsilon_greedy_probs(env, Q[state], i_episode) action = np.random.choice(np.arange(env.nA), p=policy_s) for t_step in np.arange(300): next_state, reward, done, info = env.step(action) score += reward if not done: policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) next_action = np.random.choice(np.arange(env.nA), p=policy_s) Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), reward, alpha, gamma) state = next_state action = next_action if done: Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
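It is worth noting that all three control methods in this notebook share the same `update_Q(Qsa, Qsa_next, reward, alpha, gamma)` form and differ only in the value supplied for `Qsa_next`:

- Sarsa: $Q(S_{t+1}, A_{t+1})$, the value of the next action actually selected (on-policy);
- Q-learning (Sarsamax): $\max_a Q(S_{t+1}, a)$ (off-policy);
- Expected Sarsa: $\sum_a \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a)$ under the $\epsilon$-greedy policy.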
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): Q = defaultdict(lambda: np.zeros(env.nA)) plot_every = 100 tmp_scores = [] scores = [] for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) while True: # pick next action action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # get epsilon-greedy action probabilities (for S') policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
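For the $\epsilon$-greedy policy produced by `epsilon_greedy_probs`, the expected target used above can also be written as

$$\sum_a \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a) = (1 - \epsilon) \max_a Q(S_{t+1}, a) + \frac{\epsilon}{|\mathcal{A}|} \sum_a Q(S_{t+1}, a),$$

so as $\epsilon$ shrinks (here it decays as `1/i_episode`) the target approaches the Q-learning target. Because the expectation removes the randomness of sampling the next action, Expected Sarsa tolerates a much larger step size, which is why the cell below uses $\alpha = 0.5$ while the Sarsa and Q-learning cells above use $\alpha = 0.01$.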
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.5) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code V_opt[:10][1] # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_scheduler(num_episodes_total, num_episodes_so_far): # epsilon = 1 - num_episodes_so_far/num_episodes_total # return epsilon if epsilon > 0.1 else 0.1 return 1 / num_episodes_so_far def greedify(Q, action_space): # could have just extracted policy after episode loop, probably faster that way policy = defaultdict(lambda: action_space.sample()) for s in Q: policy[s] = np.argmax(Q[s]) return policy def get_epsilon_greedy_action(policy, state, epsilon, action_space): return policy[state] if np.random.uniform() > epsilon else action_space.sample() def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = epsilon_scheduler(num_episodes, i_episode) state = env.reset() policy = greedify(Q, env.action_space) action = get_epsilon_greedy_action(policy, state, epsilon, env.action_space) done = False while not done: next_state, next_reward, done, info = env.step(action) policy = greedify(Q, env.action_space) next_action = get_epsilon_greedy_action(policy, next_state, epsilon, env.action_space) Q[state][action] += alpha * (next_reward + gamma * Q[next_state][next_action] - Q[state][action]) state, action = next_state, next_action return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = epsilon_scheduler(num_episodes, i_episode) state = env.reset() done = False while not done: policy = greedify(Q, env.action_space) action = get_epsilon_greedy_action(policy, state, epsilon, env.action_space) next_state, next_reward, done, info = env.step(action) Q[state][action] += alpha * (next_reward + gamma * np.max(Q[next_state]) - Q[state][action]) state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 0.005 # epsilon_scheduler(num_episodes, i_episode) state = env.reset() action_prob_non_greedy = epsilon / env.nA action_prob_greedy = 1 - epsilon + action_prob_non_greedy done = False while not done: policy = greedify(Q, env.action_space) action = get_epsilon_greedy_action(policy, state, epsilon, env.action_space) next_state, next_reward, done, info = env.step(action) expected_Q = 0 greedy_action = np.argmax(Q[next_state]) for a, q in enumerate(Q[next_state]): expected_Q += q * action_prob_greedy if a == greedy_action else q * action_prob_non_greedy Q[state][action] += alpha * (next_reward + gamma * expected_Q - Q[state][action]) state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
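Before moving on, it can be worth stepping the freshly created environment by hand to confirm the reward structure (each move should cost -1, and stepping into the cliff should cost -100 and send the agent back to the start). A minimal sketch, assuming the classic Gym API used throughout this notebook, where `env.step` returns a `(next_state, reward, done, info)` tuple:
```
# Hypothetical sanity check: not part of the starter code.
# Run after the cell below has created `env`.
state = env.reset()
for _ in range(10):
    action = env.action_space.sample()                  # pick a random action
    next_state, reward, done, info = env.step(action)   # classic 4-tuple Gym API
    print(state, action, reward, next_state, done)
    state = env.reset() if done else next_state
```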
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/tk/miniconda/envs/drl_gpu/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/tk/miniconda/envs/drl_gpu/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/tk/miniconda/envs/drl_gpu/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/tk/miniconda/envs/drl_gpu/lib/python3.6/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_action(epsilon, action_values): epsilon_greedy_action.possible_actions[0] = np.argmax(action_values) action_probs = np.concatenate(([1-epsilon], np.full(env.nA, epsilon/env.nA))) return np.random.choice(epsilon_greedy_action.possible_actions, p=action_probs) epsilon_greedy_action.possible_actions = np.concatenate(([0], np.arange(env.nA))) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0/i_episode done = False state = env.reset() action = epsilon_greedy_action(epsilon, Q[state]) while not done: state_next, reward, done, info = env.step(action) action_next = epsilon_greedy_action(epsilon, Q[state_next]) G = reward + gamma*Q[state_next][action_next] Q[state][action] = (1-alpha)*Q[state][action] + alpha*G state, action = state_next, action_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0/i_episode done = False state = env.reset() action = epsilon_greedy_action(epsilon, Q[state]) while not done: state_next, reward, done, info = env.step(action) action_next = epsilon_greedy_action(epsilon, Q[state_next]) G = reward + gamma*max(Q[state_next]) Q[state][action] = (1-alpha)*Q[state][action] + alpha*G state, action = state_next, action_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_action_expected_reward(epsilon, action_values): best_action = epsilon_greedy_action_expected_reward.possible_actions[0] = np.argmax(action_values) action_probs = np.concatenate(([1-epsilon], np.full(env.nA, epsilon/env.nA))) return (np.random.choice(epsilon_greedy_action_expected_reward.possible_actions, p=action_probs), (1-epsilon)*action_values[best_action]+epsilon*np.mean(action_values)) epsilon_greedy_action_expected_reward.possible_actions = np.concatenate(([0], np.arange(env.nA))) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 0.005 done = False state = env.reset() action = epsilon_greedy_action(epsilon, Q[state]) while not done: state_next, reward, done, info = env.step(action) action_next, exp_reward = epsilon_greedy_action_expected_reward(epsilon, Q[state_next]) G = reward + gamma*exp_reward Q[state][action] = (1-alpha)*Q[state][action] + alpha*G state, action = state_next, action_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 20000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 20000/20000 ###Markdown Table of Contents1&nbsp;&nbsp;Temporal-Difference Methods1.0.1&nbsp;&nbsp;Part 0: Explore CliffWalkingEnv1.0.2&nbsp;&nbsp;Part 1: TD Control: Sarsa1.0.3&nbsp;&nbsp;Part 2: TD Control: Q-learning1.0.4&nbsp;&nbsp;Part 3: TD Control: Expected Sarsa Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
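Before diving into the implementations, it may help to keep the three one-step update rules side by side. All three methods below apply the same update $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(\text{target} - Q(S_t, A_t)\big)$ and differ only in the target used (standard formulations):
- Sarsa: target $= R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1})$
- Q-learning (Sarsamax): target $= R_{t+1} + \gamma\, \max_a Q(S_{t+1}, a)$
- Expected Sarsa: target $= R_{t+1} + \gamma\, \sum_a \pi(a \mid S_{t+1})\, Q(S_{t+1}, a)$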
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, eps): """ """ if np.random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return np.random.choice(np.arange(env.action_space.n)) def generate_episode_from_policy_sarsa(env, Q, current_episode, alpha=0.2, gamma=1): ''' Generates an episode following the inputed policy (dictionnary where each key is a possible state). It will also perform the SARSA temporal difference and update the Q-table while the episode unfolds. 
''' episode = [] state = env.reset() reward = None;action = None eps = 1.0 while True: action = epsilon_greedy(Q, state, eps/current_episode) # reward is none only when action is A0 and state is S0 (i.e the first iteration of the loop) if (reward is not None): #Q[St][At] = Q[St][At] + alpha * (reward + gamma * Q[St+1][At+1] - Q[St][At]) (old_state, old_action, _) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward+gamma*Q[state][action] - Q[old_state][old_action]) # St+1, Rt+1 next_state, reward, done, info = env.step(action) # (St, At, Rt+1) episode.append((state, action, reward)) state = next_state if done: break (old_state, old_action, reward) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward - Q[old_state][old_action]) #policy = epsilon_greedy(Q, current_episode, total_episodes, env) return episode, Q def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() _, Q = generate_episode_from_policy_sarsa(env, Q, i_episode, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, eps): """ """ if np.random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return np.random.choice(np.arange(env.action_space.n)) def generate_episode_from_policy_q_learning(env, Q, current_episode, alpha=0.2, gamma=1): ''' Generates an episode following the inputed policy (dictionnary where each key is a possible state). It will also perform the Q-Learning temporal difference and update the Q-table while the episode unfolds. ''' episode = [] state = env.reset() reward = None;action = None eps = 1.0 while True: action = epsilon_greedy(Q, state, eps/current_episode) # reward is none only when action is A0 and state is S0 (i.e the first iteration of the loop) if (reward is not None): #Q[St][At] = Q[St][At] + alpha * (reward + gamma * max(Q[St+1]) - Q[St][At]) (old_state, old_action, _) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward+gamma*np.max(Q[state]) - Q[old_state][old_action]) # St+1, Rt+1 next_state, reward, done, info = env.step(action) # (St, At, Rt+1) episode.append((state, action, reward)) state = next_state if done: break (old_state, old_action, reward) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward - Q[old_state][old_action]) #policy = epsilon_greedy(Q, current_episode, total_episodes, env) return episode, Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function _, Q = generate_episode_from_policy_q_learning(env, Q, i_episode, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
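If the grid of action indices printed by the next cell is hard to read, a small helper (hypothetical, not part of the starter code) can render the greedy policy as arrows; for CliffWalking the estimated optimal policy should run right along the row directly above the cliff:
```
def render_policy(policy_grid):
    # Map action indices to arrows; -1 marks states the agent never visited.
    arrows = {0: '^', 1: '>', 2: 'v', 3: '<', -1: '.'}
    for row in policy_grid:
        print(' '.join(arrows[int(a)] for a in row))

# e.g. render_policy(policy_sarsamax) after the next cell has run
```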
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, eps): """ """ if np.random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return np.random.choice(np.arange(env.action_space.n)) def generate_episode_from_policy_expected_sarsa(env, Q, current_episode, alpha=0.2, gamma=1): ''' Generates an episode following the inputed policy (dictionnary where each key is a possible state). It will also perform the Expected SARSA temporal difference and update the Q-table while the episode unfolds. 
''' episode = [] state = env.reset() reward = None;action = None eps = 0.05 while True: action = epsilon_greedy(Q, state, eps/current_episode) # reward is none only when action is A0 and state is S0 (i.e the first iteration of the loop) if (reward is not None): #Q[St][At] = Q[St][At] + alpha * (reward + gamma * average(Q[St+1]) - Q[St][At]) greedy_prob = 1 - eps/current_episode # we defined the probabilities of the actions to be chosen: the base value is the probability for the non-greedy actions to be selected probs = np.ones(env.action_space.n)* (1 - greedy_prob)/(env.action_space.n-1) # updating the greedy action value probs[np.argmax(Q[state])] = greedy_prob (old_state, old_action, _) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward+gamma*np.dot(Q[state], probs) - Q[old_state][old_action]) # St+1, Rt+1 next_state, reward, done, info = env.step(action) # (St, At, Rt+1) episode.append((state, action, reward)) state = next_state if done: break (old_state, old_action, reward) = episode[-1] Q[old_state][old_action] = Q[old_state][old_action] + alpha*(reward - Q[old_state][old_action]) #policy = epsilon_greedy(Q, current_episode, total_episodes, env) return episode, Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function _, Q = generate_episode_from_policy_expected_sarsa(env, Q, i_episode, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
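All of the implementations in this notebook rely on $\epsilon$-greedy action selection. Written out explicitly (standard formulation), the behaviour policy is$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \dfrac{\epsilon}{|\mathcal{A}|} & \text{if } a = \arg\max_{a'} Q(s, a') \\ \dfrac{\epsilon}{|\mathcal{A}|} & \text{otherwise,} \end{cases}$$which the various helpers here implement either explicitly, by building this probability vector, or implicitly, by flipping a biased coin between the greedy action and a uniformly random one.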
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # Note that the given 1/i_iteration decay seems to be too steep ! # it could lead to lack of exploration, converging to greedy policy too soon. s2 = np.array([max(1.0/i, 0.05) for i in range(1,5001)]) plt.plot(np.linspace(0,5000,len(s2)), s2) plt.show() # So, let's go slower, 0.999 decay s = np.array([max(0.999**i, 0.05) for i in range(1,5001)]) plt.plot(np.linspace(0,5000,len(s)), s) plt.show() def get_prob(env, Q_s, i_episode): ##### DECAY TEST ## test given decay rate # epsilon = 1.0 / i_episode ## test fixed epsilon # epsilon = 0.1 ## test lower decay rate epsilon = max(0.999**i_episode, 0.05) #### policy = np.ones(env.nA) * epsilon / env.nA best_a = np.argmax(Q_s) policy[best_a] = (1- epsilon)+(epsilon/env.nA) return policy def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() # Make init. policiy with init. 
action values policy_s = get_prob(env, Q[state], i_episode) # Take init action with init policy action = np.random.choice(np.arange(env.nA), p = policy_s) score = 0 while True: # Get reward_t+1, state_t+1 new_state, reward, done, info = env.step(action) score += reward if not done: # Make the new policy with current action values policy_s = get_prob(env, Q[new_state], i_episode) # Take action_t+1 with the policy new_action = np.random.choice(np.arange(env.nA), p = policy_s) # Update Q table Q[state][action] = Q[state][action] \ + alpha*(reward + gamma*Q[new_state][new_action] - Q[state][action]) # Take time step state = new_state action = new_action if done: Q[state][action] = Q[state][action] + alpha*(reward + 0 - Q[state][action]) tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_prob(env, Q_s, i_episode): #epsilon = 1.0 / i_episode epsilon = max(0.999**i_episode, 0.05) policy = np.ones(env.nA) * epsilon / env.nA best_a = np.argmax(Q_s) policy[best_a] = (1- epsilon)+(epsilon/env.nA) return policy def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() score = 0 while True: policy_s = get_prob(env, Q[state], i_episode) action = np.random.choice(np.arange(env.nA), p = policy_s) new_state, reward, done, info = env.step(action) score += reward Q[state][action] = Q[state][action] + alpha*(reward + gamma*np.max(Q[new_state]) - Q[state][action]) state = new_state if done: tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_prob(env, Q_s, i_episode): #epsilon = max(1.0 / i_episode, 0.1) epsilon = max(0.999**i_episode, 0.05) policy = np.ones(env.nA) * epsilon / env.nA best_a = np.argmax(Q_s) policy[best_a] = (1- epsilon)+(epsilon/env.nA) return policy def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() policy_s = get_prob(env, Q[state], i_episode) score = 0 while True: action = np.random.choice(np.arange(env.nA), p = policy_s) new_state, reward, done, info = env.step(action) score += reward policy_s = get_prob(env, Q[new_state], i_episode) Q[state][action] = Q[state][action] +\ alpha*(reward + gamma*np.dot(Q[new_state], policy_s) - Q[state][action]) state = new_state if done: tmp_scores.append(score) break; if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
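As a quick arithmetic check on the optimal values defined a few cells below: with $\gamma = 1$ and a reward of $-1$ per step, the best the agent can do from the start state is to move up, travel right along the row directly above the cliff, and drop down into the goal (13 steps in total), so $v_*(36) = 13 \times (-1) = -13$, and the optimal value rises by one with each step taken toward the goal along that path.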
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) def epsilon_greedy_policy(Q,s,nA,epsilon): if np.random.random() > epsilon: return np.argmax(Q[s]) else: return np.random.choice(np.arange(nA)) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes nA = env.action_space.n for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() eps = 1/i_episode action = epsilon_greedy_policy(Q, state, nA, eps) while True: next_state, reward, done, _ = env.step(action) next_action = epsilon_greedy_policy(Q, next_state , nA , eps) Q[state][action] += alpha*(reward + gamma * Q[next_state][next_action] - Q[state][action]) state, action = next_state,next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
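Beyond the unit test, one extra way to sanity-check the learned Q-table is to roll out a single purely greedy episode and look at its return; if the estimated policy is optimal, the return should be -13. A sketch (assuming `Q_sarsa` exists after the cell below has run; the step cap guards against a greedy policy that never reaches the goal):
```
import numpy as np

def greedy_return(env, Q, max_steps=100):
    state = env.reset()
    total = 0
    for _ in range(max_steps):                   # cap to avoid an endless loop
        q_s = Q.get(state, np.zeros(env.nA))     # .get avoids adding new keys to the defaultdict
        state, reward, done, _ = env.step(int(np.argmax(q_s)))
        total += reward
        if done:
            break
    return total

# print(greedy_return(env, Q_sarsa))
```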
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() eps = 1/i_episode while True: action = epsilon_greedy_policy(Q, state, env.action_space.n, eps) next_state,reward,done,_ = env.step(action) Q[state][action] += alpha*(reward + gamma * np.max(Q[next_state])- Q[state][action]) if done: break state = next_state ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
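For reference, the update applied by the implementation above is the Q-learning (Sarsamax) rule, which replaces the sampled next action with a maximisation over actions:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \big).$$

Because the target always uses the greedy action, regardless of which action the $\epsilon$-greedy behaviour policy takes next, Q-learning is an off-policy method.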
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) policy = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() eps = 1/num_episodes while True: action = epsilon_greedy_policy(Q, state, env.action_space.n, eps) next_state,reward,done,_ = env.step(action) policy = np.ones(env.action_space.n) * (eps/(env.action_space.n)) policy[np.argmax(Q[next_state])] = 1 - eps + eps/(env.action_space.n) target = reward + gamma * np.dot(policy,Q[next_state]) Q[state][action] += alpha * (target - Q[state][action]) if done: break state = next_state ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
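For reference, Expected Sarsa replaces the single sampled next-state value with its expectation under the $\epsilon$-greedy policy $\pi$ derived from $Q$:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \Big).$$

The `np.dot(policy, Q[next_state])` term in the implementation above computes exactly this expectation, with probability $\epsilon / |\mathcal{A}|$ on every action plus an extra $1 - \epsilon$ on the greedy one.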
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
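(As a quick, optional sanity check before running the cell below, and using arbitrary state and action indices purely for illustration: starting from an all-zero table with reward $-1$, $\gamma = 1$ and $\alpha = 0.01$, the `update_Q_sarsa` helper defined above should return $0 + 0.01 \times (-1 + 1 \cdot 0 - 0) = -0.01$.)

```python
# Optional sanity check of update_Q_sarsa (illustrative values only).
Q_check = defaultdict(lambda: np.zeros(env.nA))
print(update_Q_sarsa(0.01, 1.0, Q_check, 36, 0, -1, 24, 0))   # expect -0.01
```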
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' 
% plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # 
epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output C:\ProgramData\Anaconda3\lib\site-packages\numpy\_distributor_init.py:32: UserWarning: loaded more than 1 DLL from .libs: C:\ProgramData\Anaconda3\lib\site-packages\numpy\.libs\libopenblas.TXA6YQSD3GCQQC22GEQ54J2UDCXDXHWN.gfortran-win_amd64.dll C:\ProgramData\Anaconda3\lib\site-packages\numpy\.libs\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll stacklevel=1) ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1.0 / i_episode state=env.reset() action = np.argmax(Q[state]) if np.random.rand()>eps else \ np.random.choice(np.arange(env.action_space.n)) while True: next_state, reward, done, info = env.step(action) if not done: next_action = np.argmax(Q[next_state]) if random.random()>eps else \ np.random.choice(np.arange(env.action_space.n)) Q[state][action]+=alpha*((reward+gamma*Q[next_state][next_action] if \ next_state is not None else 0)-Q[state][action]) state=next_state action=next_action if done: Q[state][action]+=alpha*((reward+gamma*0)-Q[state][action]) break return Q # sarsa(env, 200, .01) ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps= 1/i_episode state= env.reset() while True: action= Q[state].argmax() if np.random.randn()>eps else \ np.random.choice(np.arange(env.action_space.n)) next_state, reward, done, info = env.step(action) if not done: Q[state][action]+=alpha*((reward+gamma*Q[next_state].max())-Q[state][action]) state=next_state if done: Q[state][action]+=alpha*((reward+gamma*0)-Q[state][action]) break return Q # q_learning(env, 200, .01) ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code 3 if 1 is not 0 else 1 if 1 is not None else 0 def epsilon_greedy_action(Q, state, eps, nA): return Q[state].argmax() if np.random.randn()>eps else np.random.choice(np.arange(nA)) def update_Q_expect_sarsa(Q, state, action, reward, gamma, alpha, eps, nA, next_state=None): policy_s =np.ones(nA)*eps/nA if next_state is not None: policy_s[Q[next_state].argmax()] = 1-eps+(eps/nA) return Q[state][action]+alpha*((reward+gamma*(np.dot(Q[next_state],policy_s) if next_state is not None else 0))-Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps=1/i_episode state=env.reset() while True: action=epsilon_greedy_action(Q, state, eps, env.action_space.n) next_state, reward, done, info=env.step(action) if not done: Q[state][action]=update_Q_expect_sarsa(Q, state, action, reward, gamma, alpha, eps, env.action_space.n, next_state) state=next_state if done: Q[state][action]=update_Q_expect_sarsa(Q, state, action, reward, gamma, alpha, eps, env.action_space.n) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def epsilon_greedy(Q, state, nA, epsilon): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > epsilon: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(nA)) def update_Q_sarsa(env, Q, alpha, gamma, epsilon): state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, epsilon) next_state, reward, done, info = env.step(action) next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) if done: Q[state][action] += alpha * (reward - Q[state][action]) break else: Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action] - Q[state][action]) state = next_state return Q def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode Q = update_Q_sarsa(env, Q, alpha, gamma, epsilon) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_learning(env, Q, alpha, gamma, epsilon): state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, epsilon) next_state, reward, done, info = env.step(action) if done: Q[state][action] += alpha * (reward - Q[state][action]) break else: Q[state][action] += alpha * (reward + gamma * Q[next_state][np.argmax(Q[next_state])] - Q[state][action]) state = next_state return Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode Q = update_Q_learning(env, Q, alpha, gamma, epsilon) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected(env, Q, alpha, gamma, epsilon): state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, epsilon) next_state, reward, done, info = env.step(action) probs = np.ones(env.nA) * epsilon/env.nA probs[np.argmax(Q[next_state])] += 1-epsilon Q[state][action] += alpha * (reward + gamma * np.dot(probs, Q[next_state]) - Q[state][action]) state = next_state if done: break return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = 0.005 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # epsilon = 1.0/i_episode Q = update_Q_expected(env, Q, alpha, gamma, epsilon) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt import random %matplotlib inline import check_test from plot_utils import plot_values ###Output /usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds) ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output /Users/ehu/Projects/gym/gym/__init__.py:22: UserWarning: DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. 
Please run "import gym.spaces" to load gym.spaces on your own. This warning will turn into an error in a future version of gym. warnings.warn('DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. Please run "import gym.spaces" to load gym.spaces on your own. This warning will turn into an error in a future version of gym.') ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Users/ehu/Projects/deep-reinforcement-learning/venv/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] Q_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Q_next) Q[state][action] += alpha * (target - Q[state][action]) def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ return np.argmax(Q[state]) if random.random() >= eps else random.choice(range(nA)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1.0 / i_episode state = env.reset() action = epsilon_greedy(Q, state, env.nA, eps) while True: next_state, reward, done, info = env.step(action) if not done: next_action = epsilon_greedy(Q, next_state, env.nA, eps) update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action if done: update_Q_sarsa(alpha, gamma, Q, state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] Q_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + (gamma * Q_next) Q[state][action] += alpha * (target - Q[state][action]) def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1.0 / i_episode state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, reward, done, info = env.step(action) if not done: update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) state = next_state if done: update_Q_sarsamax(alpha, gamma, Q, state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_exp_sarsa(alpha, gamma, Q, state, action, reward, eps, next_state=None): def action_prob(action, state): """Returns pi(a|s) where pi is an e-greedy policy""" if action == np.argmax(Q[state]): return 1 - eps + eps * (1/env.nA) return eps * 1/env.nA """Returns updated Q-value for the most recent experience.""" current = Q[state][action] policy_s = np.ones(env.nA) * eps/env.nA policy_s[np.argmax(Q[next_state])] += 1 - eps assert Q[next_state].shape == policy_s.shape Q_next = Q[next_state].dot(policy_s) target = reward + (gamma * Q_next) Q[state][action] += alpha * (target - Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = .005 state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, reward, done, info = env.step(action) if not done: update_Q_exp_sarsa(alpha, gamma, Q, state, action, reward, eps, next_state) state = next_state if done: update_Q_exp_sarsa(alpha, gamma, Q, state, action, reward, eps) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
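If it helps to see the shape of the update that the `## TODO` above is asking for, the tabular Sarsa backup is $Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\,[R_{t+1} + \gamma Q(S_{t+1},A_{t+1}) - Q(S_t,A_t)]$. One possible way to package it is the helper sketched below; the name `sarsa_update`, its signature, and the toy usage are illustrative only and not part of the starter code.

```python
from collections import defaultdict
import numpy as np

def sarsa_update(Q, state, action, reward, next_state, next_action, alpha, gamma, done):
    """One tabular Sarsa backup: Q(S,A) <- Q(S,A) + alpha * (target - Q(S,A))."""
    target = reward
    if not done:                              # bootstrap from Q(S', A') only when S' is non-terminal
        target += gamma * Q[next_state][next_action]
    return Q[state][action] + alpha * (target - Q[state][action])

# toy usage with the same Q structure the notebook uses (dict of per-state arrays)
Q = defaultdict(lambda: np.zeros(4))
Q[36][0] = sarsa_update(Q, 36, 0, -1, 24, 1, alpha=0.01, gamma=1.0, done=False)
print(Q[36])                                  # [-0.01, 0., 0., 0.]
```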
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values random.seed(514) ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code from typing import Dict, List def get_probs(action_id: int, nA: int, eps: float): return [ 1 - eps + eps / nA if idx == action_id else eps / nA for idx in range(nA) ] def get_next_action(Q: Dict[int, np.ndarray], state: int, nA: int, eps: float): action_id_with_most_reward = np.argmax(Q[state]) return np.random.choice( np.arange(nA), p=get_probs(action_id_with_most_reward, nA, eps), ) def sarsa( env, num_episodes: int, alpha: float, gamma: float=1.0, eps_start: float=1.0, eps_decay: float=.99999, eps_min: float=0.05, plot_every: int=100, ): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) eps = eps_start # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initial state score = 0 state = env.reset() eps = 0.005 # critical: somehow decayed eps doesn't work # eps = 1.0 / i_episode # critical: somehow decayed eps doesn't work, so overwrite it action = get_next_action(Q, state, nA, eps) while True: # simulate the next step next_state, reward, done, info = env.step(action) score += reward # find the best next action next_action = get_next_action(Q, next_state, nA, eps) # update Q-table new_value = reward + gamma * Q[next_state][next_action] Q[state][action] = (1 - alpha) * Q[state][action] + alpha * new_value if not done: state = next_state action = next_action else: new_value = reward tmp_scores.append(score) break # update eps eps = max(eps * eps_decay, eps_min) if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # 
print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning( env, num_episodes: int, alpha: float, gamma: float=1.0, eps_start: float=1.0, eps_decay: float=.99999, eps_min: float=0.05, plot_every: int=100, ): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) eps = eps_start # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initial state score = 0 state = env.reset() eps = 0.005 # critical: somehow decayed eps doesn't work # eps = 1.0 / i_episode # critical: somehow decayed eps doesn't work, so overwrite it action = get_next_action(Q, state, nA, eps) while True: # simulate the next step next_state, reward, done, info = env.step(action) score += reward # find the best next action next_action = get_next_action(Q, next_state, nA, eps) # update Q-table new_value = reward + gamma * np.max(Q[next_state]) Q[state][action] = (1 - alpha) * Q[state][action] + alpha * new_value if not done: state = next_state action = next_action else: tmp_scores.append(score) break # update eps eps = max(eps * eps_decay, eps_min) if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance 
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa( env, num_episodes: int, alpha: float, gamma: float=1.0, eps_start: float=1.0, eps_decay: float=.99999, eps_min: float=0.05, plot_every: int=100, ): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) eps = eps_start # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initial state score = 0 state = env.reset() eps = 0.005 # critical: somehow decayed eps doesn't work # eps = 1.0 / i_episode # critical: somehow decayed eps doesn't work action = get_next_action(Q, state, nA, eps) while True: # simulate the next step next_state, reward, done, info = env.step(action) score += reward # find the best next action next_action = get_next_action(Q, next_state, nA, eps) # calculate policy probs action_id_with_most_reward = np.argmax(Q[next_state]) policy_probs = get_probs(action_id_with_most_reward, nA, eps) expected_next_reward = np.sum([ policy_prob * next_reward for policy_prob, next_reward in zip(policy_probs, Q[next_state]) ]) # update Q-table new_value = reward + gamma * expected_next_reward Q[state][action] = (1 - alpha) * Q[state][action] + alpha * new_value if not done: state = next_state action = next_action else: tmp_scores.append(score) break # update eps eps = max(eps * eps_decay, eps_min) if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
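As a side note, the expectation over the $\epsilon$-greedy policy that the implementation above computes with `zip` and `np.sum` can equivalently be written as a single dot product. The self-contained sketch below uses toy numbers only to show the equivalence; it is not part of the exercise.

```python
import numpy as np

nA, eps = 4, 0.005
q_next = np.array([-13.0, -12.0, -14.0, -13.5])   # a toy Q[next_state] row

# epsilon-greedy probabilities: the greedy action gets 1 - eps + eps/nA, the rest eps/nA
probs = np.full(nA, eps / nA)
probs[np.argmax(q_next)] += 1 - eps

expected_q = probs.dot(q_next)                    # same value the zip/np.sum loop produces
print(expected_q)                                 # close to max(q_next) = -12.0 because eps is small
```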
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) state = env.reset() print(state) state = env.step(0) state = env.step(1) state = env.step(1) print(state) state = env.step(2) print(state) ###Output Discrete(4) Discrete(48) 36 (26, -1, False, {'prob': 1.0}) (36, -100, False, {'prob': 1.0}) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def GLIE(convergence_iters, iter_num, final_val): if iter_num <= convergence_iters: epsilon = (((final_val - 1)/convergence_iters)*iter_num) + 1 else: epsilon = final_val return epsilon def GLIE_asymp(iter_num): return 1/iter_num def epsilon_greedy(Qs, epsilon): policy_s = epsilon * np.ones(Qs.shape[0])/Qs.shape[0] max_index = np.argmax(Qs) policy_s[max_index] = 1 - epsilon + (epsilon/Qs.shape[0]) return policy_s def quick_epsilon_greedy(env, Qs, epsilon): if random.random() > epsilon: # select greedy action with probability epsilon return np.argmax(Qs) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def choose_At1_from_Q(env, state, Q, epsilon): if state in Q: probs = epsilon_greedy(Q[state], epsilon) action = np.random.choice(np.arange(env.nA), p=probs) else: action = env.action_space.sample() return action def quick_choose_At1_from_Q(env, state, Q, epsilon): action = quick_epsilon_greedy(env, Q[state], epsilon) return action def update_Q_sarsa(Q, alpha, gamma, st, at, rt_1, st_1 = None, at_1 = None): if st_1 == None: Q[st][at] = Q[st][at] + alpha*(rt_1 - Q[st][at]) else: Q[st][at] = Q[st][at] + alpha*((rt_1 + (gamma*Q[st_1][at_1])) - Q[st][at]) return Q def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes convergence_iters = int(num_episodes/3) temp_scores = deque(maxlen=num_episodes) avg_scores = deque(maxlen=num_episodes) for i_episode in range(1, num_episodes+1): # monitor progress #epsilon = GLIE(convergence_iters, i_episode, 1/num_episodes) #epsilon = GLIE_asymp(i_episode) epsilon = 0.1 if i_episode % 100 == 0: print("\rEpisode {}/{} epsilon = {} .".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function st = env.reset() at = choose_At1_from_Q(env, st, Q, epsilon) index = 0 episode_score = 0 while True: #print("\rindex: {} ".format(index), end="") #sys.stdout.flush() # Take action At, observe (Rt+1, St+1) #print("st = " + str(st) + " at = " + str(at)) st_1, rt_1, done, info = env.step(at) episode_score += rt_1 if done: #print(" Episode length: " + str(index)) temp_scores.append(episode_score) Q = update_Q_sarsa(Q, alpha, gamma, st, at, rt_1) break #print("st_1 = " + str(st_1) + " rt_1 = " + str(rt_1) + "done: " + str(done)) # Choose action At+1 at_1 = quick_choose_At1_from_Q(env, st_1, Q, epsilon) # Update action value Q = update_Q_sarsa(Q, alpha, gamma, st, at, rt_1, st_1, at_1) st = st_1 at = at_1 index += 1 if (i_episode % 10 == 0): avg_scores.append(np.mean(temp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Episode Reward') plt.show() return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 epsilon = 0.1 . ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(Q, alpha, gamma, st, at, rt_1, st_1 = None): if st_1 == None: Q[st][at] = Q[st][at] + alpha*(rt_1 - Q[st][at]) else: Q[st][at] = Q[st][at] + alpha*((rt_1 + (gamma*np.max(Q[st_1]))) - Q[st][at]) return Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes convergence_iters = int(num_episodes/3) temp_scores = deque(maxlen=num_episodes) avg_scores = deque(maxlen=num_episodes) for i_episode in range(1, num_episodes+1): # monitor progress #epsilon = GLIE(convergence_iters, i_episode, 1/num_episodes) epsilon = GLIE_asymp(i_episode) #epsilon = 0.005 if i_episode % 100 == 0: print("\rEpisode {}/{} epsilon = {} .".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function st = env.reset() index = 0 episode_score = 0 while True: # Choose action At at = quick_choose_At1_from_Q(env, st, Q, epsilon) # Take action At, observe (Rt+1, St+1) st_1, rt_1, done, info = env.step(at) episode_score += rt_1 if done: #print(" Episode length: " + str(index)) temp_scores.append(episode_score) Q = update_Q_sarsamax(Q, alpha, gamma, st, at, rt_1) break # Update action value Q = update_Q_sarsamax(Q, alpha, gamma, st, at, rt_1, st_1) st = st_1 index += 1 if (i_episode % 100 == 0): avg_scores.append(np.mean(temp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Episode Reward') plt.show() return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 epsilon = 0.0002 .08163265306123 . ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa(Q, alpha, gamma, epsilon, st, at, rt_1, st_1 = None): if st_1 == None: Q[st][at] = Q[st][at] + alpha*(rt_1 - Q[st][at]) else: expected_value = Q[st_1].dot(epsilon_greedy(Q[st_1], epsilon)) Q[st][at] = Q[st][at] + alpha*((rt_1 + (gamma*expected_value)) - Q[st][at]) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes convergence_iters = int(num_episodes/3) temp_scores = deque(maxlen=num_episodes) avg_scores = deque(maxlen=num_episodes) for i_episode in range(1, num_episodes+1): # monitor progress #epsilon = GLIE(convergence_iters, i_episode, 1/num_episodes) #epsilon = GLIE_asymp(i_episode) epsilon = 0.005 if i_episode % 100 == 0: print("\rEpisode {}/{} epsilon = {} .".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function st = env.reset() index = 0 episode_score = 0 while True: # Choose action At at = quick_choose_At1_from_Q(env, st, Q, epsilon) # Take action At, observe (Rt+1, St+1) st_1, rt_1, done, info = env.step(at) episode_score += rt_1 if done: #print(" Episode length: " + str(index)) temp_scores.append(episode_score) Q = update_Q_expected_sarsa(Q, alpha, gamma, epsilon, st, at, rt_1) break # Update action value Q = update_Q_expected_sarsa(Q, alpha, gamma, epsilon, st, at, rt_1, st_1) st = st_1 index += 1 if (i_episode % 100 == 0): avg_scores.append(np.mean(temp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Episode Reward') plt.show() return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy 
and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 epsilon = 0.005 . ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values import random ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1 / i_episode state = env.reset() action = epsilon_greedy(Q, state, env.nA, epsilon) while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' if not done: next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
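The Sarsa implementation above anneals $\epsilon$ as $1/i$, which matches the usual GLIE idea of letting exploration decay towards zero while every action continues to be tried. If you want to experiment with other schedules, a multiplicative decay with a floor is a common alternative; the constants in the sketch below are illustrative only.

```python
eps_start, eps_decay, eps_min = 1.0, 0.999, 0.05

eps_mult = eps_start
for i_episode in range(1, 11):
    eps_inverse = 1.0 / i_episode                   # the schedule used in sarsa() above
    eps_mult = max(eps_mult * eps_decay, eps_min)   # multiplicative decay with a floor
    print(i_episode, round(eps_inverse, 3), round(eps_mult, 3))
```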
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_qlearning(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() epsilon = 1. / i_episode while True: action = epsilon_greedy(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) # take action A, observe R, S' Q[state][action] = update_Q_qlearning(alpha, gamma, Q, state, action, reward, next_state) state = next_state # S <- S' if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
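The only difference between `update_Q_sarsa` and `update_Q_qlearning` above is the bootstrap term: Sarsa uses the value of the action actually selected in the next state, while Q-learning uses the value of the greedy action there. A tiny side-by-side check with toy numbers (illustrative only):

```python
import numpy as np

gamma, reward = 1.0, -1.0
q_next = np.array([-15.0, -12.0, -14.0, -13.0])          # a toy Q[next_state] row
next_action = 3                                          # the action the eps-greedy policy happened to pick

sarsa_target     = reward + gamma * q_next[next_action]  # -14.0: bootstraps from the sampled A'
qlearning_target = reward + gamma * np.max(q_next)       # -13.0: bootstraps from the greedy action
print(sarsa_target, qlearning_target)
```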
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa(alpha, gamma, nA, epsilon, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * epsilon / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - epsilon + (epsilon / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 0.005 state = env.reset() while True: action = epsilon_greedy(Q, state, env.nA, epsilon) next_state, reward, done, info = env.step(action) # take action A, observe R, S' Q[state][action] = update_Q_expected_sarsa(alpha, gamma, env.nA, epsilon, Q, state, action, reward, next_state) state = next_state # S <- S' if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
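One way to see how the Expected Sarsa target in `update_Q_expected_sarsa` relates to the other two methods: as $\epsilon \to 0$ the expectation collapses to the greedy (Q-learning) target, and at $\epsilon = 1$ it is just the average over all actions. The toy check below is only an illustration, not part of the exercise.

```python
import numpy as np

def expected_value(q_next, eps):
    """E_pi[Q(S', .)] under the epsilon-greedy policy derived from q_next."""
    nA = len(q_next)
    probs = np.full(nA, eps / nA)
    probs[np.argmax(q_next)] += 1 - eps
    return probs.dot(q_next)

q_next = np.array([-15.0, -12.0, -14.0, -13.0])
print(expected_value(q_next, 0.0))     # -12.0  -> identical to the Q-learning max
print(expected_value(q_next, 1.0))     # -13.5  -> plain average over the four actions
print(expected_value(q_next, 0.005))   # roughly -12.008, the small-eps regime used above
```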
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def get_action(Q, state, nA, eps): if random.random() > eps: action = np.argmax(Q[state]) else: action = np.random.choice(np.arange(nA)) return action def update_Q_sarsa(Q, state, action, reward, next_state, next_action, done, alpha, gamma): target = reward if not done: target += gamma*Q[next_state][next_action] new_Q_value = Q[state][action] + alpha*(target - Q[state][action]) return new_Q_value def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes nA = env.action_space.n for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1./i_episode state = env.reset() action = get_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) next_action = get_action(Q, next_state, nA, eps) Q[state][action] = update_Q_sarsa(Q, state, action, reward, next_state, next_action, done, alpha, gamma) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function nA = env.action_space.n eps = 1./i_episode state = env.reset() action = get_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) next_action = get_action(Q, next_state, nA, eps) a = np.argmax(Q[next_state]) Q[state][action] = update_Q_sarsa(Q, state, action, reward, next_state, a, done, alpha, gamma) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_probs(Q_state, eps): probs = eps*np.ones_like(Q_state)/len(Q_state) probs[np.argmax(Q_state)]+=1-eps return probs def update_Q_expected_sarsa(Q, state, action, reward, next_state, done, alpha, gamma, eps): target = reward if not done: probs = get_probs(Q[next_state], eps) target += gamma*sum(probs*Q[next_state]) new_Q_value = Q[state][action] + alpha*(target - Q[state][action]) return new_Q_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function nA = env.action_space.n eps = 0.005 #1./i_episode state = env.reset() action = get_action(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) next_action = get_action(Q, next_state, nA, eps) Q[state][action] = update_Q_expected_sarsa(Q, state, action, reward, \ next_state, done, alpha, gamma, eps) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
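(For orientation, before the environment is created in the cell below: every implementation in this notebook is built on the same agent-environment loop. The sketch here is illustrative only and assumes the classic `gym` API, in which `env.reset()` returns the starting state and `env.step(action)` returns a `(next_state, reward, done, info)` tuple; the TD methods later simply replace the random action choice with an epsilon-greedy one.)

```python
import gym

demo_env = gym.make('CliffWalking-v0')   # separate throw-away instance for this sketch

state = demo_env.reset()                 # every episode starts in state 36
total_reward = 0
while True:
    action = demo_env.action_space.sample()            # random behaviour policy (placeholder)
    next_state, reward, done, info = demo_env.step(action)
    total_reward += reward
    state = next_state
    if done:                                            # the episode ends at the goal state 47
        break
print(total_reward)
```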
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown ![](./images/cliff_human_approach.png) Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy(eps, state, Q, nA) : if random.random() > eps : return np.argmax(Q[state]) else : return random.choice(np.arange(env.action_space.n)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state = None, next_action = None) : Qsa = Q[state][action] Qsa_prime = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_prime) new_value = Qsa + alpha * (target - Qsa) return new_value def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = .99999, eps_min = .05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = eps_start / i_episode # max(eps_start*eps_decay**i_episode, eps_min) state = env.reset() action = eps_greedy(eps, state, Q, nA) while True : next_state, reward, done, info = env.step(action) if not done : next_action = eps_greedy(eps, next_state, Q, nA) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, \ next_state, next_action) state = next_state action = next_action if done : # last state's value always 0 Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = 1. / i_episode # max(eps_start*eps_decay**i_episode, eps_min) state = env.reset() action = eps_greedy(eps, state, Q, nA) while True : next_state, reward, done, info = env.step(action) if not done : next_action = eps_greedy(0, next_state, Q, nA) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, \ next_state, next_action) state = next_state action = next_action if done : # last state's value always 0 Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected Sarsa[https://paperswithcode.com/method/expected-sarsa](https://paperswithcode.com/method/expected-sarsa) In this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._)![](./images/eq_expected_sarsa.png) ###Code def update_Q_expected_sarsa(alpha, gamma, Q, state, action, reward, nA, eps, next_state = None) : Qsa = Q[state][action] greedy_action = np.argmax(Q[next_state]) policy_s = np.ones(nA)*eps / nA policy_s[np.argmax(Q[next_state])] = 1 - eps + eps / nA #print(Q[next_state]) expected_Q = np.average(Q[next_state], weights = policy_s) target = reward + (gamma * expected_Q) new_value = Qsa + alpha * (target - Qsa) return new_value """Returns updated Q-value for the most recent experience.""" # current = Q[state][action] # estimate in Q-table (for current state, action pair) # policy_s = np.ones(nA) * eps / nA # current policy (for next state S') # policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action # Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step # target = reward + (gamma * Qsa_next) # construct target # new_value = current + (alpha * (target - current)) # get updated value # return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.action_space.n # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = 0.0005#max(1 / i_episode, eps_min) state = env.reset() while True : action = eps_greedy(eps, state, Q, nA) next_state, reward, done, info = env.step(action) Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, state, action, reward, nA, eps, next_state) state = next_state if done : # last state's value always 0 break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
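One recurring detail is worth calling out before the imports: every implementation in this document keeps the action-value function in a `defaultdict` of NumPy arrays, so any state that is looked up for the first time automatically receives a zero-initialized row of action values. A minimal standalone sketch of that behaviour (with a hypothetical 4-action setting):

```python
from collections import defaultdict
import numpy as np

nA = 4                                    # number of actions (hypothetical)
Q = defaultdict(lambda: np.zeros(nA))     # unseen states map to an all-zero row

print(Q[36])              # [0. 0. 0. 0.]  -- state 36 is created on first access
Q[36][1] += 0.5           # bump the value of action RIGHT in state 36
print(np.argmax(Q[36]))   # 1
```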
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def e_greedy(actions, eps, nA): # Identify the greedy action greedy_action = np.argmax(actions) # Initialize equiprobable policy policy = np.ones(nA) * (eps / len(actions)) # Emphasizes the greedy action policy[greedy_action] += 1 - eps # Choose the next action action = np.random.choice(np.arange(nA), p=policy) return action def sarsa(env, num_episodes, alpha, gamma=1.0, eps=1.0, eps_decay=0.999, eps_min=0.1): nA = env.nA # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## DONE: complete the function # start a new episode state = env.reset() action = e_greedy(Q[state], eps, nA) while True: # make one move next_state, reward, done, info = env.step(action) # Follow the greedy policy to determine next action next_action = e_greedy(Q[next_state], eps, nA) # Calculate reward reward += (gamma * Q[next_state][next_action]) - Q[state][action] # Update Q-Table Q[state][action] += alpha * reward # Break if this episode has finished if done: break # Update variables for next step state = next_state action = next_action # Decay epsilon after each episode eps = max(eps*eps_decay, eps_min) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, 0.01, eps=1, eps_decay=0.8, eps_min=0.0001) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps=1.0, eps_decay=0.999, eps_min=0.1): nA = env.nA # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## DONE: complete the function # start a new episode state = env.reset() while True: # Follow the greedy policy to determine next action action = e_greedy(Q[state], eps, nA) # make one move next_state, reward, done, info = env.step(action) # Select the next action which has the maximum reward next_action_max = e_greedy(Q[next_state], 0, nA) # Calculate reward reward += (gamma * Q[next_state][next_action_max]) - Q[state][action] # Update Q-Table Q[state][action] += alpha * reward # Break if this episode has finished if done: break # Update state for next step state = next_state # Decay epsilon after each episode eps = max(eps*eps_decay, eps_min) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, 0.01, eps=1, eps_decay=0.8, eps_min=0.0001) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._)

 ###Code

def expected_reward(actions, eps):
    # Calculate the reward for the equiprobable policy
    reward = sum(actions * (eps / len(actions)))

    # Identify the greedy action
    greedy_action = np.argmax(actions)

    # Add the reward for the greedy action
    reward += actions[greedy_action] * (1 - eps)

    return reward

def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps=1.0, eps_decay=0.999, eps_min=0.1):
    nA = env.nA

    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(env.nA))

    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()

        ## DONE: complete the function
        # start a new episode
        state = env.reset()

        while True:
            # Follow the epsilon-greedy policy to determine the next action
            action = e_greedy(Q[state], eps, nA)

            # make one move
            next_state, reward, done, info = env.step(action)

            # Calculate the expected action value of the next state
            # (stored under a new name so it does not shadow the expected_reward function above)
            exp_reward = expected_reward(Q[next_state], eps)

            # Calculate the TD error
            reward += (gamma * exp_reward) - Q[state][action]

            # Update Q-Table
            Q[state][action] += alpha * reward

            # Break if this episode has finished
            if done:
                break

            # Update state for next step
            state = next_state

        # Decay epsilon after each episode
        eps = max(eps*eps_decay, eps_min)

    return Q

 ###Output

 _____no_output_____

 ###Markdown

 Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.

 ###Code

# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 5000, 1, eps=0.005, eps_decay=1, eps_min=0.005)

# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)

# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])

 ###Output

 Episode 5000/5000

 ###Markdown

 Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages.

 ###Code

import sys
import gym
import numpy as np
from collections import defaultdict, deque
import matplotlib.pyplot as plt
%matplotlib inline

import check_test
from plot_utils import plot_values

 ###Output

 _____no_output_____

 ###Markdown

 Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
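Several of the implementations above and below build an explicit epsilon-greedy probability vector over the actions before sampling. As a standalone reference (hypothetical numbers, independent of the environment created in the cell below), the construction looks roughly like this:

```python
import numpy as np

eps, nA = 0.1, 4
Q_s = np.array([1.0, 3.0, 0.5, 2.0])     # hypothetical action values for one state

probs = np.ones(nA) * eps / nA           # every action gets eps / nA
probs[np.argmax(Q_s)] += 1 - eps         # the greedy action receives the remaining mass
print(probs, probs.sum())                # [0.025 0.925 0.025 0.025] 1.0
action = np.random.choice(np.arange(nA), p=probs)
```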
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output D:\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " D:\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " D:\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " D:\Anaconda3\envs\tensorflow\lib\site-packages\matplotlib\cbook\__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._)

 ###Code

def epsilon_greedy_action(epsilon, Qs, nA):
    # epsilon-greedy policy
    policy_s = np.ones(nA) * epsilon / nA
    greedy_action = np.argmax(Qs)
    policy_s[greedy_action] = 1 - epsilon + epsilon / nA
    # get action from epsilon-greedy from Q
    action = np.random.choice(np.arange(nA), p=policy_s)
    return action

def sarsa(env, num_episodes, alpha, gamma=1.0):
    # set constant epsilon-greedy
    epsilon = 0.1
    # get number of actions
    nA = env.action_space.n

    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()

        ## TODO: complete the function
        # observe init state
        state = env.reset()
        # choose init action
        action = epsilon_greedy_action(epsilon, Q[state], nA)
        while True:
            # take action and observe next state and reward
            next_state, reward, done, _ = env.step(action)
            # choose the next action from the *next* state's action values
            next_action = epsilon_greedy_action(epsilon, Q[next_state], nA)
            # update Q using Sarsa
            old_Q = Q[state][action]
            Q[state][action] = old_Q + alpha * (reward + gamma * Q[next_state][next_action] - old_Q )
            # update state and action
            state = next_state
            action = next_action

            if done:
                break

    return Q

 ###Output

 _____no_output_____

 ###Markdown

 Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.

 ###Code

# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)

 ###Output

 _____no_output_____

 ###Markdown

 Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
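If you get stuck on `expected_sarsa` above: the heart of the update is the expected action value of the next state under the current epsilon-greedy policy, rather than a sampled or maximal one. One possible sketch of that piece (the helper name and the use of a constant `eps` are illustrative, not part of the starter code):

```python
import numpy as np

def expected_q(Q_next, eps):
    """Expected action value of the next state under an epsilon-greedy policy."""
    nA = len(Q_next)
    probs = np.ones(nA) * eps / nA
    probs[np.argmax(Q_next)] += 1 - eps
    return np.dot(Q_next, probs)

# one update for a single (S, A, R, S') transition, assuming alpha, gamma and eps are given:
# Q[state][action] += alpha * (reward + gamma * expected_q(Q[next_state], eps) - Q[state][action])
```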
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !wget -nc -q https://raw.githubusercontent.com/joaopamaral/deep-reinforcement-learning/master/temporal-difference/check_test.py !wget -nc -q https://raw.githubusercontent.com/joaopamaral/deep-reinforcement-learning/master/temporal-difference/plot_utils.py import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /usr/local/lib/python3.6/dist-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use 
the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) state = next_state # S <- S' if done: 
tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
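As a quick sanity check on the `np.dot(Q[next_state], policy_s)` term inside `update_Q_expsarsa` above, the expected value can be verified by hand on a small standalone example (the numbers are hypothetical):

```python
import numpy as np

eps, nA = 0.005, 4
Q_next = np.array([-2.0, -1.0, -3.0, -4.0])         # hypothetical action values for S'

policy_s = np.ones(nA) * eps / nA                   # eps / nA for every action
policy_s[np.argmax(Q_next)] = 1 - eps + (eps / nA)  # greedy action gets the rest
print(np.dot(Q_next, policy_s))
# by hand: 0.99625 * (-1.0) + 0.00125 * (-2.0 - 3.0 - 4.0) = -1.0075
```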
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import random import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' else: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
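To get a feel for what `update_Q_sarsa` above does numerically, here is a single-transition example (hypothetical values; it assumes the cell defining `update_Q_sarsa` has already been run). Note how a terminal transition, signalled by leaving `next_state` as `None`, bootstraps from zero:

```python
from collections import defaultdict
import numpy as np

alpha, gamma = 0.01, 1.0
Q = defaultdict(lambda: np.zeros(4))
Q[36][0], Q[24][1] = -10.0, -9.0     # hypothetical current estimates

# non-terminal transition (S=36, A=0, R=-1, S'=24, A'=1):
# target = -1 + 1.0 * (-9.0) = -10.0, so the estimate happens to stay at -10.0
print(update_Q_sarsa(alpha, gamma, Q, 36, 0, -1.0, 24, 1))

# terminal transition: the target collapses to the reward alone
print(update_Q_sarsa(alpha, gamma, Q, 36, 0, -1.0))   # -10.0 + 0.01 * (-1.0 + 10.0) = -9.91
```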
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' 
% plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # 
epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code Q = defaultdict(lambda: np.zeros(env.nA)) Q[36] def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_min=0.0001,plot_every=100): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function #epsilon = max(epsilon/i_episode, eps_min) epsilon = epsilon/i_episode state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() score = 0 while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if state in Q else env.action_space.sample() Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state action = next_action if done: Q[state][action] = 
update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) #policy = dict((k,np.argmax(v)) for k, v in Q.items()) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0,eps_decay=0.9, eps_min=0.001,plot_every=100): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = max(epsilon*eps_decay, eps_min) #epsilon = epsilon/i_episode state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() score = 0 while True: next_state, reward, done, info = env.step(action) score += reward if not done: old_Q = Q[state][action] target = reward + gamma*np.max(Q[next_state]) Q[state][action] = old_Q + alpha*(target - old_Q) state = next_state action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() if done: old_Q = Q[state][action] target = reward Q[state][action] = old_Q + alpha*(target - old_Q) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) #policy = dict((k,np.argmax(v)) for k, v in Q.items()) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
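The only change from Sarsa is the TD target: Q-learning bootstraps from the greedy action value in the next state instead of the value of the action the behaviour policy actually takes. The snippet below is a small illustrative sketch (arbitrary numbers, independent of the functions above) showing how the two targets differ for the same transition.

```python
import numpy as np

# contrast the Sarsa and Q-learning (Sarsamax) targets on one invented transition
gamma = 1.0
reward = -1
Q_next = np.array([-3.0, -1.0, -2.0, -4.0])   # action values for the next state S'
next_action = 2                               # action the epsilon-greedy policy happened to pick

sarsa_target     = reward + gamma * Q_next[next_action]   # -1 + (-2.0) = -3.0
qlearning_target = reward + gamma * np.max(Q_next)         # -1 + (-1.0) = -2.0
print(sarsa_target, qlearning_target)
```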
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa(alpha, gamma,epsilon,nA, Q, state, action, reward, next_state): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step policy_s=get_probs(Q[state], epsilon, nA) Qsa_next = np.dot(Q[next_state], policy_s) target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0,eps_start=1.0,eps_decay=0.9, eps_min=0.005,plot_every=100): nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes #epsilon = eps_start epsilon = 0.005 for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function #epsilon = max(epsilon*eps_decay, eps_min) #epsilon = epsilon/i_episode state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() score = 0 while True: next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expected_sarsa(alpha, gamma,epsilon,nA, Q, state, action, reward, next_state) state = next_state action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) #policy = dict((k,np.argmax(v)) for k, v in Q.items()) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 
100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 0.5) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output /Users/Tristan/anaconda3/lib/python3.7/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. result = entry_point.load(False) ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Users/Tristan/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /Users/Tristan/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /Users/Tristan/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /Users/Tristan/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state=None, next_action=None): Qsa_current = Q[state][action] Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + gamma*Qsa_next Q_updated = Qsa_current + alpha*(target - Qsa_current) return Q_updated def eps_greedy(Q, state, nA, eps): if np.random.random() > eps: # Choose greedy argmax action = np.argmax(Q[state]) else: # Choose equirandom state from A action space action = np.random.choice(np.arange(env.action_space.n)) return action def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=0.2, eps_decay=0.9999, eps_min=0.01, plot_every=100): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() #eps = max(eps*eps_decay, eps_min) eps = 1.0/i_episode action = eps_greedy(Q, state, nA, eps) # Run one step ahead (not full ep) while True: # Take action A_t and observe R_{t+1}, S_{t+1} next_state, reward, done, info = env.step(action) score += reward if not done: # Choose action A_{t+1} using e-greedy policy derived from Q next_action = eps_greedy(Q, next_state, nA, eps) # Update the Q table for Q(S_t,A_t) using Q(S_{t+1},A_{t+1}) Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state, next_action) state = next_state # s -> s' action = next_action # a -> a' if done: Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
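As a quick sanity check on the exploration scheme, the sketch below (self-contained, with arbitrary action values) estimates how often an epsilon-greedy rule like `eps_greedy` above returns the greedy action; with four actions and eps = 0.2 the frequency should land near 1 - eps + eps/4 = 0.85, because the greedy action can also be drawn in the random branch.

```python
import numpy as np

# empirical check of epsilon-greedy selection probabilities (illustrative only)
rng = np.random.default_rng(0)
nA, eps = 4, 0.2
q_values = np.array([-3.0, -1.0, -2.0, -4.0])   # greedy action is index 1

def select(q, eps):
    if rng.random() > eps:
        return int(np.argmax(q))        # exploit
    return int(rng.integers(nA))        # explore uniformly

picks = np.array([select(q_values, eps) for _ in range(100000)])
print((picks == 1).mean())   # roughly 0.85
```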
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Mistakes made* Forgot to make next_state, next_action = None on the last state of an episode* Passed state S_t not S_{t+1} into epsilon-greedy to get the next next action - remember we evaluate a further step ahead and already have A_t from env.step(state) but need A_{t+1} which comes from S_{t+1} ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): Qsa_current = Q[state][action] Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + gamma*Qsa_next Q_updated = Qsa_current + alpha*(target - Qsa_current) return Q_updated def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = eps_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
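The estimated policy printed in the next cell is just a grid of action indices. A small optional helper like the one sketched below (a hypothetical convenience, not part of `plot_utils` or the starter code) renders it as arrows, which makes it easy to see that Q-learning's greedy policy hugs the cliff edge.

```python
# hypothetical helper, not provided by the project files
def render_policy(policy_grid):
    arrows = {0: '^', 1: '>', 2: 'v', 3: '<', -1: '.'}
    return '\n'.join(' '.join(arrows[int(a)] for a in row) for row in policy_grid)

# example usage after policy_sarsamax is computed in the next cell:
# print(render_policy(policy_sarsamax))
```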
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expectedsarsa(alpha, gamma, Q, state, action, reward, eps, nA, next_state=None): Qsa_current = Q[state][action] policy_s = np.ones(nA) * eps / nA policy_s[np.argmax(Q[next_state])] += 1 - eps # Now compute expectation over all actions for S_{t+1} Qsa_next = np.dot(policy_s, Q[next_state]) # if next_state is not None else 0 target = reward + gamma* Qsa_next Q_updated = Qsa_current + alpha*(target - Qsa_current) return Q_updated def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100, eps_decay=0.9999, eps_min=0, eps_start=0.0001): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode #eps = 1.0 / i_episode # set value of epsilon #eps = 0.005 eps = max(eps_min, eps*eps_decay) while True: action = eps_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_expectedsarsa(alpha, gamma, Q, \ state, action, reward, eps, nA, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell 
to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) [(key,Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)] [(key,np.max(Q_expsarsa[key]),np.argmax(Q_expsarsa[key])) if key in Q_expsarsa else -1 for key in np.arange(48)] ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """returns updated Q-value for the most recent experience""" current = Q[state][action] Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + gamma * Qsa_next new_value = current + alpha * (target - current) return new_value def epsilon_greedy(Q, state, nA, eps): if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA # initialize performance monitor # loop over episodes tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episode for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 1.0 / i_episode # more episode, more rely on accumulated knowledge action = epsilon_greedy(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action # epsilon_greedy(Q, state, nA, eps) if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episode)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa_max(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience """ current_q_value = Q[state][action] next_q_value = np.max(Q[next_state]) if next_state is not None else 0 target_value = reward + gamma * next_q_value new_q_value = current_q_value + alpha * ( target_value - current_q_value) return new_q_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA # initialize performance monitor # loop over episodes tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episode for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 1.0 / i_episode # more episode, more rely on accumulated knowledge while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_sarsa_max(env, gamma, Q, state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episode)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. 
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa_expected(alpha, gamma, Q, state, action, reward, epsilon, nA, next_state=None): """Returns updated Q-value for the most recent experience """ current_q_value = Q[state][action] policy_epsilon = np.ones(nA) * epsilon / nA policy_epsilon[np.argmax(Q[state])] = 1 - epsilon + epsilon / nA next_q_vector = Q[next_state] if next_state is not None else np.zeros(nA) next_q_value = np.dot( next_q_vector , policy_epsilon) target_value = reward + gamma * next_q_value new_q_value = current_q_value + alpha * ( target_value - current_q_value) return new_q_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA # initialize performance monitor # loop over episodes tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episode for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 0.005 # 1.0 / i_episode # more episode, more rely on accumulated knowledge while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_sarsa_expected(env, gamma, Q, state, action, reward, eps, nA, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward 
(Over Next %d Episode)' % plot_every) plt.show() return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt import pandas as pd import seaborn as sns %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output C:\Users\cheng\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) C:\Users\cheng\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) C:\Users\cheng\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) C:\Users\cheng\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def action_probs(action_values, epsilon): nA = len(action_values) probs = np.full(nA, epsilon/nA) probs[np.argmax(action_values)] += 1-epsilon return probs def pick_action(Q, state, epsilon): action_values=Q[state] nA = len(action_values) return np.random.choice(np.arange(nA), p=action_probs(action_values=action_values, epsilon=epsilon)) action_probs(action_values=[1.0, 3.0, 2.0], epsilon=0.05) def get_epsilon(i_episode, num_episodes, epsilon_decay=0.9): return epsilon_decay**i_episode # def get_epsilon(i_episode, num_episodes): # return 1.0/i_episode print(get_epsilon(i_episode=5, num_episodes=10000)) print(get_epsilon(i_episode=100, num_episodes=10000)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every_episode=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) stats = [] # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = get_epsilon(i_episode=i_episode, num_episodes=num_episodes) state = env.reset() action = pick_action(Q, state, epsilon) while True: next_state, reward, done, info = env.step(action) next_action = pick_action(Q, next_state, epsilon) if done: next_action_value = 0 else: next_action_value = Q[next_state][next_action] target = reward + gamma*next_action_value action_value = Q[state][action] Q[state][action] = action_value + alpha*(target-action_value) state = next_state action = next_action score += reward if done: break if ((i_episode-1) % plot_every_episode) == 0: stats.append([i_episode, score]) stats = np.array(stats) df = pd.DataFrame(data={"Episode number": stats[:,0], "Total rewards": stats[:,1]}) df.plot.line(x="Episode number", y="Total rewards") return Q sample_Q = sarsa(env=env, num_episodes=3, alpha=0.1) print(sample_Q) ###Output defaultdict(<function sarsa.<locals>.<lambda> at 0x000000000B52A268>, {36: array([ -57.62609698, -182.07363858, -76.50852227, -103.80574142]), 24: array([-28.15879181, -60.2511319 , -87.95992323, -55.64857583]), 12: array([-18.04763556, -21.33763061, -43.52985618, -26.21069648]), 13: array([-16.07617169, -20.43143077, -54.93370365, -19.65628788]), 25: array([ -29.52990975, -35.32771614, -188.23873024, -32.57963832]), 14: array([-11.30893533, -12.50386304, -62.31322334, -17.15226423]), 2: array([-11.93446702, -8.56938733, -24.060686 , -18.89294994]), 3: array([ -6.28064532, -5.45427481, -14.68632791, -10.80132562]), 15: array([ -6.90679961, -7.50230442, -26.20130053, -13.92965434]), 27: array([ -10.80609975, -24.39964282, -148.25986539, -29.98179526]), 16: array([ -3.74325116, -3.68191299, -22.47733453, -7.12375008]), 1: array([-15.05479571, -12.78369381, -28.58518389, -17.00094015]), 0: array([-16.18151525, -13.9802856 , -27.71139804, -14.93115893]), 26: array([ -11.78092388, -33.96912167, -170.40149815, -31.07316115]), 28: array([ -9.11466657, -10.99295425, -119.47052853, -11.2772709 ]), 4: array([ -4.1754732 , -3.04367032, -10.34781271, -5.18562679]), 17: array([ 
-2.4768078 , -3.68031282, -47.66638022, -5.02142559]), 29: array([ -1.33709803, -9.63353719, -143.39040259, -2.78436971]), 30: array([ -2.69105373, -2.0767675 , -94.27383099, -4.9924747 ]), 31: array([ -1.13158309, -6.64084039, -73.13129705, -1.4197488 ]), 5: array([-2.95952945, -2.75159893, -6.23845842, -3.50212181]), 18: array([-2.56476054, -2.61219207, -9.01560704, -5.76087973]), 6: array([-2.85924957, -2.2833046 , -4.44734571, -2.82914837]), 19: array([-1.54785959, -1.97936576, -7.8256273 , -3.92586934]), 20: array([ -1.06720677, -1.12700775, -13.67785535, -1.11082602]), 32: array([ -0.54854219, -1.09972353, -105.21476812, -1.53174769]), 7: array([-2.01942914, -1.94068054, -2.57981256, -1.91820121]), 8: array([-1.49286563, -1.33395249, -1.96209804, -1.84737563]), 33: array([ -0.32518601, -5.38179948, -57.20062405, -23.39847306]), 9: array([-1.45028937, -1.31364007, -1.97931278, -1.4200348 ]), 21: array([-0.85921876, -1.0902151 , -7.19254906, -0.82344264]), 22: array([-1.16986783, -0.96962742, -2.78828285, -1.06019707]), 10: array([-1.12977866, -1.11650826, -1.30703852, -1.29712047]), 11: array([-0.91407602, -0.6798045 , -0.8855685 , -1.17520619]), 34: array([ -0.50350691, -0.51432447, -54.9474124 , -1.90933509]), 35: array([-0.65670923, -0.3051991 , -0.271 , -0.49072866]), 23: array([-0.92108665, -0.98928702, -0.73523556, -0.77779196]), 47: array([0., 0., 0., 0.])}) ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every_episode=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) stats = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = get_epsilon(i_episode=i_episode, num_episodes=num_episodes) state = env.reset() while True: action = pick_action(Q, state, epsilon) next_state, reward, done, info = env.step(action) if done: max_next_state_value = 0 else: max_next_state_value = np.max(Q[next_state]) target = reward + gamma*max_next_state_value action_value = Q[state][action] Q[state][action] = action_value + alpha*(target-action_value) state = next_state score += reward if done: break if (i_episode % plot_every_episode) == 0: stats.append([i_episode, score]) stats = np.array(stats) df = pd.DataFrame(data={"Episode number": stats[:,0], "Total rewards": stats[:,1]}) df.plot.line(x="Episode number", y="Total rewards") return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every_episode=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) stats = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = get_epsilon(i_episode=i_episode, num_episodes=num_episodes) state = env.reset() nA = env.action_space.n while True: probs = action_probs(action_values=Q[state], epsilon=epsilon) action = np.random.choice(np.arange(nA), p=probs) next_state, reward, done, info = env.step(action) if done: expected_next_state_value = 0 else: expected_next_state_value = np.dot(probs, Q[next_state]) target = reward + gamma*expected_next_state_value action_value = Q[state][action] Q[state][action] = action_value + alpha*(target-action_value) state = next_state score += reward if done: break if (i_episode % plot_every_episode) == 0: stats.append([i_episode, score]) stats = np.array(stats) df = pd.DataFrame(data={"Episode number": stats[:,0], "Total rewards": stats[:,1]}) df.plot.line(x="Episode number", y="Total rewards") return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
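(Aside, not part of the original starter code: every implementation in this notebook drives the environment through the same `reset`/`step` loop. A minimal sketch of that loop with a purely random policy, assuming the classic gym API that the cells below already rely on, where `reset()` returns a state and `step(action)` returns `(next_state, reward, done, info)`:)

```python
import gym

env = gym.make('CliffWalking-v0')
state = env.reset()                     # start of an episode (state 36)
total_reward = 0
for _ in range(500):                    # cap the rollout length, as later cells also do
    action = env.action_space.sample()  # random action, purely to illustrate the loop
    state, reward, done, info = env.step(action)
    total_reward += reward
    if done:                            # only the goal state (47) ends an episode
        break
print(total_reward)                     # a random policy typically earns a very negative return here
```

The TD methods in Parts 1-3 replace the random `action` above with an epsilon-greedy choice based on the current `Q` estimates.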
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] += 1 - epsilon return policy_s def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print(f"\rEpisode {i_episode}/{num_episodes}", end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() # set the value of epsilon epsilon = 1 / i_episode while True: action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], epsilon, env.nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) next_action = np.random.choice(np.arange(env.nA), p=get_probs(Q[next_state], epsilon, env.nA)) \ if state in Q else env.action_space.sample() Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action] - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print(f"\rEpisode {i_episode}/{num_episodes}", end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() # set the value of epsilon epsilon = 1 / i_episode while True: action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], epsilon, env.nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 100 == 0: print(f"\rEpisode {i_episode}/{num_episodes}", end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() # set the value of epsilon epsilon = 0.005 while True: action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], epsilon, env.nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) prob = get_probs(Q[next_state], epsilon, env.nA) expt = np.sum(prob * Q[next_state]) Q[state][action] += alpha * (reward + gamma * expt - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) import os os.l ###Output 'pwd' is not recognized as an internal or external command, operable program or batch file. ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state.
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): epsilon = max(0.2, 1.0*(1-(i_episode/num_episodes))) # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}, epsilon: {}".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function' state = env.reset() done = False while not done: #calculate an action rand = random.uniform(0, 1) if rand < epsilon: action_0 = np.random.choice(np.arange(env.nA)) else: action_0 = np.argmax(Q[state]) state_0, reward_0, done, info = env.step(action_0) if done: break #print("next_state: {}, reward: {}, done: {}, info: {}".format(next_state, reward, done, info)) #take the next action (use greedy) action_1 = np.argmax(Q[state_0]) state_1, reward_1, done, info = env.step(action_1) #update the Q value of the previous(current) state current_state_action_reward = Q[state][action_0] Q[state][action_0] = \ (1-alpha)*current_state_action_reward + alpha*(reward_0 + reward_1) state = state_1 return Q Q = sarsa(env, 1000, .04) print("") keys = sorted(Q.keys()) for key in keys: print("state: {}, rewards: {}".format(key, Q[key])) # UP = 0 # RIGHT = 1 # DOWN = 2 # LEFT = 3 ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. 
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000, epsilon: 0.21999999999999997 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): epsilon = max(0.2, 1.0*(1-(i_episode/num_episodes))) # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}, epsilon: {}".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function' state = env.reset() done = False while not done: #calculate an action rand = random.uniform(0, 1) if rand < epsilon: action = np.random.choice(np.arange(env.nA)) else: action = np.argmax(Q[state]) next_state, reward, done, info = env.step(action) #print("next_state: {}, reward: {}, done: {}, info: {}".format(next_state, reward, done, info)) #get best reward possible from next state next_state_max_reward_action = np.argmax(Q[next_state]) next_state_max_reward = Q[next_state][next_state_max_reward_action] #update the Q value of the previous(current) state current_state_action_reward = Q[state][action] Q[state][action] = \ (1-alpha)*current_state_action_reward + alpha*(reward + next_state_max_reward) state = next_state return Q Q = q_learning(env, 10000, .04) print("") keys = sorted(Q.keys()) for key in keys: print("state: {}, rewards: {}".format(key, Q[key])) # UP = 0 # RIGHT = 1 # DOWN = 2 # LEFT = 3 ###Output Episode 10000/10000, epsilon: 0.2999999999999996 state: 0, rewards: [-15. -14. -14. -15.] state: 1, rewards: [-14. -13. -13. -15.] state: 2, rewards: [-13. -12. -12. -14.] state: 3, rewards: [-12. -11. -11. -13.] state: 4, rewards: [-11. -10. -10. -12.] state: 5, rewards: [-10. -9. -9. -11.] state: 6, rewards: [ -9. -8. -8. -10.] state: 7, rewards: [-8. 
-7. -7. -9.] state: 8, rewards: [-7. -6. -6. -8.] state: 9, rewards: [-6. -5. -5. -7.] state: 10, rewards: [-5. -4. -4. -6.] state: 11, rewards: [-4. -4. -3. -5.] state: 12, rewards: [-15. -13. -13. -14.] state: 13, rewards: [-14. -12. -12. -14.] state: 14, rewards: [-13. -11. -11. -13.] state: 15, rewards: [-12. -10. -10. -12.] state: 16, rewards: [-11. -9. -9. -11.] state: 17, rewards: [-10. -8. -8. -10.] state: 18, rewards: [-9. -7. -7. -9.] state: 19, rewards: [-8. -6. -6. -8.] state: 20, rewards: [-7. -5. -5. -7.] state: 21, rewards: [-6. -4. -4. -6.] state: 22, rewards: [-5. -3. -3. -5.] state: 23, rewards: [-4. -3. -2. -4.] state: 24, rewards: [-14. -12. -14. -13.] state: 25, rewards: [ -13. -11. -113. -13.] state: 26, rewards: [ -12. -10. -113. -12.] state: 27, rewards: [ -11. -9. -113. -11.] state: 28, rewards: [ -10. -8. -113. -10.] state: 29, rewards: [ -9. -7. -113. -9.] state: 30, rewards: [ -8. -6. -113. -8.] state: 31, rewards: [ -7. -5. -113. -7.] state: 32, rewards: [ -6. -4. -113. -6.] state: 33, rewards: [ -5. -3. -113. -5.] state: 34, rewards: [ -4. -2. -113. -4.] state: 35, rewards: [-3. -2. -1. -3.] state: 36, rewards: [ -13. -113. -14. -14.] state: 47, rewards: [0. 0. 0. 0.] ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .02) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000, epsilon: 0.21999999999999997 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): epsilon = max(0.0, 1.0*(1.0-(i_episode/num_episodes))) # monitor progress if i_episode % 10 == 0: print("\rEpisode {}/{}, epsilon: {}".format(i_episode, num_episodes, epsilon), end="") sys.stdout.flush() ## TODO: complete the function' state = env.reset() done = False while not done: #calculate an action rand = random.uniform(0, 1) if rand < epsilon: action = np.random.choice(np.arange(env.nA)) else: action = np.argmax(Q[state]) next_state, reward, done, info = env.step(action) #print("next_state: {}, reward: {}, done: {}, info: {}".format(next_state, reward, done, info)) #get best reward possible from next state # next_state_max_reward_action = np.argmax(Q[next_state]) # next_state_max_reward = Q[next_state][next_state_max_reward_action] #generate next_state_max_reward based upon probabilities of next actions next_state_max_index = np.argmax(Q[next_state]) next_state_weighted_reward = 0 # rand = random.uniform(0, 1) # if rand < epsilon: # # for next_action_index in range(env.nA): # # next_state_weighted_reward += (Q[next_state][next_action_index] / env.nA)' # rand_action = np.random.choice(np.arange(env.nA)) # next_state_weighted_reward = Q[next_state][rand_action] # else: # next_state_weighted_reward = Q[next_state][next_state_max_index] rand_action = np.random.choice(np.arange(env.nA)) next_state_weighted_reward += epsilon * Q[next_state][rand_action] / env.nA next_state_weighted_reward += (1.0-epsilon) * Q[next_state][next_state_max_index] #next_state_weighted_reward = Q[next_state][next_state_max_index] #print("{} - {}".format(next_state_weighted_reward, Q[next_state])) #update the Q value of the previous(current) state current_state_action_reward = Q[state][action] Q[state][action] = \ (1-alpha)*current_state_action_reward + alpha*(reward + next_state_weighted_reward) state = next_state return Q Q = expected_sarsa(env, 10000, .04) print() keys = sorted(Q.keys()) for key in keys: print("state: {}, rewards: {}".format(key, Q[key])) # UP = 0 # RIGHT = 1 # DOWN = 2 # LEFT = 3 ###Output Episode 10000/10000, epsilon: 0.010000000000000009 state: 0, rewards: [-11.19216464 -11.21070364 -11.22118442 -11.20211127] state: 1, rewards: [-10.75063005 -10.74129132 -10.78543601 -10.77020423] state: 2, rewards: [-10.12613844 -10.13409404 -10.15896506 -10.16337335] state: 3, rewards: [-9.4465631 -9.44784059 -9.47097314 -9.46094954] state: 4, rewards: [-8.72227666 -8.71103811 -8.73772988 -8.74104803] state: 5, rewards: [-7.93633347 -7.94113959 -7.96347065 -7.97060325] state: 6, rewards: [-7.16323951 -7.1529291 -7.15037649 -7.15992603] state: 7, rewards: [-6.33936928 -6.3427328 -6.34180433 -6.33760283] state: 8, rewards: [-5.51419137 -5.50969735 -5.50605364 -5.55132544] state: 9, rewards: [-4.6560232 -4.66054869 -4.66571893 -4.70245499] state: 10, rewards: [-3.80928165 -3.80439299 -3.80591884 -4.09354459] state: 11, rewards: [-3.1493888 -3.15461852 -2.93744656 
-3.52809045] state: 12, rewards: [-11.64306492 -11.65645568 -11.65352925 -11.66540771] state: 13, rewards: [-11.0361589 -11.02460088 -11.04937713 -11.06217688] state: 14, rewards: [-10.32875506 -10.30522671 -10.33004319 -10.30171672] state: 15, rewards: [-9.51709675 -9.5183506 -9.54121828 -9.56596428] state: 16, rewards: [-8.68816568 -8.67474334 -8.69736442 -8.7133746 ] state: 17, rewards: [-7.80964709 -7.78184855 -7.80843507 -7.8010638 ] state: 18, rewards: [-6.87929802 -6.86372997 -6.86164735 -6.88772847] state: 19, rewards: [-5.93005383 -5.92576104 -5.94260871 -5.9298708 ] state: 20, rewards: [-5.21433543 -4.96511842 -5.579359 -5.49364873] state: 21, rewards: [-4.65688914 -3.98489166 -5.6431844 -4.85594118] state: 22, rewards: [-4.15581153 -2.99473677 -3.54325464 -4.23481407] state: 23, rewards: [-3.55144659 -2.79652093 -1.99901763 -3.57292219] state: 24, rewards: [-12.3510174 -12.34206079 -12.38793334 -12.37197736] state: 25, rewards: [ -11.50351414 -11.48309554 -107.40264718 -11.50827429] state: 26, rewards: [ -10.66255192 -10.63935671 -106.25651075 -10.69065903] state: 27, rewards: [ -9.80888613 -9.80846174 -106.38553675 -9.85615177] state: 28, rewards: [ -8.99326729 -8.9715428 -106.53849923 -9.02412286] state: 29, rewards: [ -8.09809904 -8.07767017 -104.8171996 -8.21439004] state: 30, rewards: [ -7.36749449 -7.07314798 -105.9963116 -7.41201732] state: 31, rewards: [ -6.1600151 -6.08314335 -105.46459366 -6.19296138] state: 32, rewards: [ -5.19557487 -5.16908551 -105.02802802 -6.08149809] state: 33, rewards: [ -4.32707215 -6.73352001 -105.40465287 -5.97691553] state: 34, rewards: [ -3.01284157 -1.95869848 -105.26955141 -6.32057643] state: 35, rewards: [-2.81066252 -1.95112909 -1. -3.45634339] state: 36, rewards: [ -13.22586881 -108.53050967 -13.25312108 -13.24579392] state: 47, rewards: [0. 0. 0. 0.] ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000, epsilon: 0.010000000000000009 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def decay_epsilon(eps_current, eps_decay, eps_min): eps_decayed = eps_current * eps_decay return eps_decayed if eps_decayed > eps_min else eps_min def test_decay_epsilon(): for eps_decay in [1, 0.5, 0.3, 0.1, 0.01]: eps_decayed = decay_epsilon(1.0, eps_decay, 0.1) print("eps_decayed(1.0, {eps_decay}, 0.1): {eps_decayed}".format(**locals())) test_decay_epsilon() def get_action_with_highest_reward(actions): # Initialize best_action and reward to the first index. best_action = 0 reward = actions[0] for i in range(len(actions)): if actions[i] > reward: reward = actions[i] best_action = i return best_action def test_get_action_with_highest_reward(): stateActionDict = { 3: [-0.9, -1.0], 6: [-0.4, -0.2], 12: [1, 10], 16: [3, 8], 20: [10, 1] } for i, key in enumerate(stateActionDict): best_action = get_action_with_highest_reward(stateActionDict[key]) print("key: ", key, ", best action: ", best_action) test_get_action_with_highest_reward() def get_probabilities_epsilon_greedy(epsilon): # 1 - epsilon = action with highest rewards for the state # epsilon = random action # probability for random action will be 1 / numPossibleActions (which is 4 for cliff walking) # Return an array of 5 elements, in terms of probability of choosing the following actions: # 1. best current action # 2. UP (0) # 3. RIGHT (1) # 4. DOWN (2) # 5. LEFT (3) return [1.0 - epsilon, epsilon / 4, epsilon / 4, epsilon / 4, epsilon / 4] def test_get_probabilities_epsilon_greedy(): for epsilon in [1.0, 0.7, 0.3, 0.1]: print("epsilon: ", epsilon, ", probs: ", get_probabilities_epsilon_greedy(epsilon)) test_get_probabilities_epsilon_greedy() def choose_action_epsilon_greedy(Q, state, epsilon): probs = get_probabilities_epsilon_greedy(epsilon) # Get current best action for the state best_action = get_action_with_highest_reward(Q[state]) # Choose an action, based on epsilon probability action = np.random.choice(np.array([best_action, 0, 1, 2, 3]), p=probs) return action def generate_episode_sarsa(env, epsilon, Q, alpha, gamma): episode = [] state = env.reset() while True: action = choose_action_epsilon_greedy(Q, state, epsilon) next_state, reward, done, info = env.step(action) #print("state: ", state, ", existing_q_value: ", Q[state][action], ", action: ", action, ", reward: ", reward, ", done: ", done, ", next_state: ", next_state) expected_next_reward = 0.0 if not done: # Choose our next hypothetical action, based on epsilon-greedy Q-value for next_state. expected_next_action = choose_action_epsilon_greedy(Q, next_state, epsilon) # Calculate the Q-value expected_next_reward = Q[next_state][expected_next_action] # Update our Q Table for this particular state and action.
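# (Explanatory comment added for clarity; the arithmetic below is unchanged.)
# The line below is the Sarsa update Q(s, a) <- Q(s, a) + alpha * [r + gamma * Q(s', a') - Q(s, a)],
# where expected_next_reward plays the role of Q(s', a') and is left at 0.0 when the episode has ended.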
new_q_value = Q[state][action] + alpha * (reward + (gamma * expected_next_reward) - Q[state][action]) # Update Q value Q[state][action] = new_q_value #print("state: ", state, ", new_q_value: ", new_q_value) episode.append((state, action, reward)) state = next_state if done: break return episode def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes # Initialize epsilon epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress # if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # Decay epsilon if i_episode > 1: epsilon = decay_epsilon(epsilon, eps_decay, eps_min) # Generate episode episode = generate_episode_sarsa(env, epsilon, Q, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function #Q_sarsa = sarsa(env, 5000, .01) Q_sarsa = sarsa(env, 100, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 100/100 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods. --- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np import matplotlib.pyplot as plt from collections import defaultdict %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Create an instance of the `CliffWalking` environment ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/jibin/miniconda3/envs/reinforcement-learning/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/jibin/miniconda3/envs/reinforcement-learning/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/jibin/miniconda3/envs/reinforcement-learning/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/jibin/miniconda3/envs/reinforcement-learning/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: Sarsa (`Sarsa(0)`)In this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. _**Notes for myself**: In my initial implementation, I didn't differentiate the case where next state is None (i.e., the current state is the terminal state)._ ###Code def sarsa(env, num_episodes, alpha=0.02, gamma=1, epsilon_init=1, epsilon_decay=0.999, epsilon_min=0.00001): Q = defaultdict(lambda: [0] * env.action_space.n) epsilon = epsilon_init for i in range(1, num_episodes + 1): if i % 100 == 0: print(f'\rEpisode: {i}/{num_episodes}.', end='') sys.stdout.flush() run_episode_with_sarsa(env, Q, alpha, gamma, epsilon) epsilon = max(epsilon * epsilon_decay, epsilon_min) return Q def run_episode_with_sarsa(env, Q, alpha, gamma, epsilon): state = env.reset() action = pick_action_using_epsilon_greedy(Q, state, epsilon) for _ in range(500): # limit steps to prevent too long episode new_state, reward, done, info = env.step(action) if not done: new_action = pick_action_using_epsilon_greedy(Q, new_state, epsilon) update_Q_sarsa(Q, state, action, reward, new_state, new_action, alpha, gamma) state, action = new_state, new_action else: update_Q_sarsa(Q, state, action, reward, None, None, alpha, gamma) return def pick_action_using_epsilon_greedy(Q, state, epsilon): action_values = Q[state] if np.random.random_sample() > epsilon: return np.argmax(action_values) else: return np.random.choice(list(range(len(action_values)))) def update_Q_sarsa(Q, state, action, reward, new_state, new_action, alpha, gamma): # Have to use `if new_state is not None` instead of `if new_state` q_of_new_state_action_pair = Q[new_state][new_action] if new_state is not None else 0 approx_return = reward + gamma * q_of_new_state_action_pair Q[state][action] = Q[state][action] + alpha * (approx_return - Q[state][action]) return ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
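A quick sanity check of the epsilon-greedy rule in `pick_action_using_epsilon_greedy` above (an illustrative aside with arbitrary numbers, not part of the original notebook): with probability $1-\epsilon$ the greedy action is taken, otherwise an action is drawn uniformly over all four actions, so the greedy action ends up with probability $1-\epsilon+\epsilon/4$ and every other action with $\epsilon/4$.

```python
import numpy as np

eps, n_actions, greedy = 0.1, 4, 2      # illustrative values only
draws = [greedy if np.random.random_sample() > eps
         else np.random.choice(n_actions)
         for _ in range(100_000)]
print(np.bincount(draws, minlength=n_actions) / len(draws))
# roughly [0.025, 0.025, 0.925, 0.025]: greedy gets 1 - eps + eps/4, the rest eps/4 each
```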
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01, epsilon_decay=0.5) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode: 5000/5000. ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. ###Code def q_learning(env, num_episodes, alpha, gamma=1, epsilon_init=1.0, epsilon_decay=0.999, epsilon_min=0.00001): Q = defaultdict(lambda: [0] * env.action_space.n) epsilon = epsilon_init for i in range(1, num_episodes+1): if i % 100 == 0: print(f'\rEpisode {i}/{num_episodes}.', end='') sys.stdout.flush() run_episode_with_q_learning(env, Q, alpha, gamma, epsilon) epsilon = min(epsilon * epsilon_decay, epsilon_min) return Q def run_episode_with_q_learning(env, Q, alpha, gamma, epsilon): state = env.reset() for _ in range(500): # limit steps in each episode action = pick_action_using_epsilon_greedy(Q, state, epsilon) new_state, reward, done, info = env.step(action) if not done: update_Q_using_q_learning(Q, state, action, reward, new_state, alpha, gamma) state = new_state else: update_Q_using_q_learning(Q, state, action, reward, None, alpha, gamma) return def update_Q_using_q_learning(Q, state, action, reward, new_state, alpha, gamma): max_q = max(Q[new_state]) if new_state is not None else 0 approx_return = reward + gamma * max_q Q[state][action] += alpha * (approx_return - Q[state][action]) return ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 3000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 3000/3000. 
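As a reference before moving on (a summary added here in the notebook's own notation, not new material from the original author): all three methods in this notebook apply the same update $Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\,\big(G^{\text{target}} - Q(S_t,A_t)\big)$ after every step, and differ only in the target. Sarsa uses $G^{\text{target}} = R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1})$ for the action actually selected next; Q-learning (Sarsamax, above) uses $G^{\text{target}} = R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a)$; Expected Sarsa (next section) uses $G^{\text{target}} = R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a)$, where $\pi$ is the current $\epsilon$-greedy policy. In every case the bootstrap term is taken to be $0$ when $S_{t+1}$ is terminal.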
###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1, epsilon_init=1, epsilon_decay=0.99, epsilon_min=0.00001):
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    epsilon = epsilon_init
    for i in range(1, num_episodes+1):
        if i % 100 == 0:
            print(f'\rEpisode {i}/{num_episodes}', end='')
            sys.stdout.flush()
        run_episode_with_expected_sarsa(env, Q, alpha, gamma, epsilon)
        epsilon = max(epsilon * epsilon_decay, epsilon_min)
    return Q

def run_episode_with_expected_sarsa(env, Q, alpha, gamma, epsilon):
    state = env.reset()
    while True:
        action = pick_action_using_epsilon_greedy(Q, state, epsilon)
        new_state, reward, done, info = env.step(action)
        if not done:
            update_Q_using_expected_sarsa(Q, state, action, reward, new_state, alpha, gamma, epsilon)
            state = new_state
        else:
            update_Q_using_expected_sarsa(Q, state, action, reward, None, alpha, gamma, None)
            break # the episode has ended; leave the loop
    return

def update_Q_using_expected_sarsa(Q, state, action, reward, new_state, alpha, gamma, epsilon):
    expected_q = calculate_expected_q(Q, new_state, epsilon) if new_state is not None else 0
    approx_return = reward + gamma * expected_q
    Q[state][action] += alpha * (approx_return - Q[state][action])
    return

def calculate_expected_q(Q, new_state, epsilon):
    action_values = Q[new_state]
    best_action = np.argmax(action_values)
    p = np.full(action_values.shape, epsilon / len(action_values))
    p[best_action] += (1 - epsilon)
    return sum(action_values * p)
 ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default._**Notes for myself**: in the first trial, I used an epsilon_init of 1.0. However, the model failed to pass the test after 10k episodes. 
After adjusting epsilon_init to 0.005, it was able to learn successfully_ ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 35, 1, epsilon_init=0.0005) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_action(Q_s, epsilon, n): actions = np.arange(n) p = np.ones(n) * epsilon/n best_action = np.argmax(Q_s) p[best_action] += 1 - epsilon return np.random.choice(actions, p=p) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function ep = 1/i_episode done = False state = env.reset() action = get_action(Q[state], ep, env.action_space.n) while not done: new_state, reward, done, _ = env.step(action) if state in Q else env.action_space.sample() new_action = get_action(Q[new_state], ep, env.action_space.n) old_q = Q[state][action] target_reward = reward + gamma*Q[new_state][new_action] if done: target_reward = reward Q[state][action] = old_q + alpha*(target_reward - old_q) action = new_action state = new_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]``` Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1/i_episode done = False state = env.reset() while not done: action = get_action(Q[state], eps, env.action_space.n) new_state, reward, done, _ = env.step(action) target_reward = max(Q[new_state]) Q[state][action] += alpha*(reward + gamma*target_reward - Q[state][action]) state = new_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1/i_episode state = env.reset() done = False while not done: action = get_action(Q[state], eps, env.nA) new_state, reward, done, _ = env.step(action) if state in Q else env.action_space.sample() pi_a = np.ones(env.nA) * eps/env.nA pi_a[np.argmax(Q[new_state])] += 1 - eps target_reward = reward + gamma * np.dot(pi_a, Q[new_state]) Q[state][action] += alpha * (target_reward - Q[state][action]) state = new_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 50000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 50000/50000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') env.render() ###Output o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o x C C C C C C C C C C T ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_value_TD(Q, next_state, next_action = None, TD_type = "sarsa", nA = None, epsilon = None): """Returns the value from the Q table based on the type of TD control algorithm""" if(TD_type == "sarsa"): return Q[next_state][next_action] if next_action is not None else 0 elif(TD_type == "Q"): return np.max(Q[next_state]) elif(TD_type == "ex_s"): policy_s = np.ones(nA) * epsilon / nA policy_s[np.argmax(Q[next_state])] = 1 - epsilon + (epsilon / nA) return np.dot(Q[next_state], policy_s) else: raise ValueError("Invalid TD_type specified") def get_action(Q, state, eps): """ Gets the action following epsilon greedy policy. Parameters: 1. Q: Q-table. 2. state: current state in the episode. 3. eps: epsilon value following GLIE conditions. Returns: - actions: the action to take based on choosing greedy action or random action """ if(np.random.random() > eps):#exploit return np.argmax(Q[state]) else: return np.random.choice(np.arange(env.action_space.n))#explore def generate_episode(env, Q, values, TD_type = "sarsa"): """ Generates an episode & updates the Q_table along the way. Parameters: 1. env: instance of cliffwalking env. 2. Q: existing Q-table 3. values: tuple of gamma, alpha, & epsilon values. 4. TD_type: type of TD control to use. default = 'SARSA'. Options: {sarsa, Q, ex_s}. Accept as str. Returns: - Q : Updated Q table. - steps_to_completion: #steps it took to complete an episode. - total_reward: total reward accumulated during the episode. """ state = env.reset() #initial state = 36 gamma, alpha, epsilon = values #extract information nA = env.action_space.n #action space size. 
    steps_to_completion:int = 0 #number of steps taken to task completion in one episode
    total_reward:int = 0 #total reward accumulated in an episode

    while True:
        action = get_action(Q, state, epsilon)#choose between greedy or equiprobable action
        next_state, reward, done,_ = env.step(action)#step into the episode

        value = Q[state][action]#get current value
        next_action = get_action(Q, next_state, epsilon)#choose between greedy or equiprobable action for next state

        #SARSA: Q(S0, A0) --> Q(S0, A0) + alpha*(R1 + gamma*Q(S1, A1) - Q(S0, A0))
        value += alpha*(reward + gamma*get_value_TD(Q, next_state, next_action, TD_type, nA, epsilon) - value)

        Q[state][action] = value #update Q-table
        state = next_state #update state

        steps_to_completion += 1 #increment number of steps to episode completion
        total_reward += reward #increment reward counter

        if(done):
            break

    return Q, steps_to_completion, total_reward


def sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    steps_arr = []
    reward_arr = []

    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()

        ## TODO: complete the function
        epsilon = 1.0/i_episode #impose GLIE conditions

        Q_updated, steps, reward = generate_episode(env, Q, (gamma, alpha, epsilon))#simulate 1 episode.
        Q = Q_updated #update value for Q-table

        #performance monitor
        steps_arr.append(steps)#append #steps for current episode (should decrease as num_episode -> ∞)
        reward_arr.append(reward)#should approach optimal reward as num_episode -> ∞

    #convert performance metrics to nd.array
    steps_arr = np.array(steps_arr)
    reward_arr = np.array(reward_arr)

    plt.plot(steps_arr, label="steps")#plot steps
    plt.plot(reward_arr, label="reward")#plot reward
    plt.xlabel("Number of episodes")
    plt.ylabel("Continuous value")
    plt.ylim(-300, 300)#set y-axis boundaries
    plt.legend(loc="upper right")#display legend
    plt.show()

    print(f"Average Reward: {reward_arr.mean()}\nAverage Steps to completion: {steps_arr.mean()}")

    return Q
 ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
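Once the next cell has produced `Q_sarsa`, a quick sanity check is to roll out the greedy policy for a single episode and confirm that it reaches the goal along the optimal path. This is a minimal sketch, not part of the original notebook; it assumes `env` and a trained `Q_sarsa` already exist in the session:

```python
# hedged example: assumes `env` and a trained Q-table `Q_sarsa` are already defined
state = env.reset()                      # start from state 36
path, total_reward, done = [state], 0, False
for _ in range(50):                      # small step cap to avoid an endless loop
    action = np.argmax(Q_sarsa[state])   # act greedily with respect to the learned Q-table
    state, reward, done, _ = env.step(action)
    path.append(state)
    total_reward += reward
    if done:
        break
print(path)          # ideally 36 -> 24 -> 25 -> ... -> 35 -> 47
print(total_reward)  # -13 for the optimal path
```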
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor steps_arr = [] reward_arr = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode #impose GLIE conditions Q_updated, steps, reward = generate_episode(env, Q, (gamma, alpha, epsilon), TD_type="Q")#simulate 1 episode. Q = Q_updated #update value for Q-table #performance monitor steps_arr.append(steps)#append #steps for current episode (should decrease as num_episode -> ∞) reward_arr.append(reward)#should approach optimal reward as num_episode -> ∞ #convert performance metrics to nd.array steps_arr = np.array(steps_arr) reward_arr = np.array(reward_arr) plt.plot(steps_arr, label="steps")#plot steps plt.plot(reward_arr, label="reward")#plot reward plt.xlabel("Number of episodes") plt.ylabel("Continuous value") plt.ylim(-300, 300)#set y-axis boundaries plt.legend(loc="upper right")#display legend plt.show() print(f"Average Reward: {reward_arr.mean()}\nAverage Steps to completion: {steps_arr.mean()}") return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor steps_arr = [] reward_arr = [] # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 0.005 #impose GLIE conditions Q_updated, steps, reward = generate_episode(env, Q,(gamma, alpha, epsilon), TD_type="ex_s")#simulate 1 episode Q = Q_updated #update value for Q-table #performance monitor steps_arr.append(steps)#append #steps for current episode (should decrease as num_episode -> ∞) reward_arr.append(reward)#should approach optimal reward as num_episode -> ∞ #convert performance metrics to nd.array steps_arr = np.array(steps_arr) reward_arr = np.array(reward_arr) plt.plot(steps_arr, label="steps")#plot steps plt.plot(reward_arr, label="reward")#plot reward plt.xlabel("Number of episodes") plt.ylabel("Continuous value") plt.ylim(-300, 300)#set y-axis boundaries plt.legend(loc="upper right")#display legend plt.show() print(f"Average Reward: {reward_arr.mean()}\nAverage Steps to completion: {steps_arr.mean()}") print(f"Max Reward: {reward_arr.max()}") return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
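For reference, the `ex_s` branch of `get_value_TD` replaces the sampled next action value with its expectation under the $\epsilon$-greedy policy,$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \big),$$where $\pi(a \mid s) = 1 - \epsilon + \epsilon/|\mathcal{A}|$ for the greedy action and $\epsilon/|\mathcal{A}|$ otherwise; this is exactly the dot product `np.dot(Q[next_state], policy_s)` computed above. 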
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def sarsa(env, num_episodes, alpha, gamma=1.0, epsilon=0.8): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon *= 0.99 state = env.reset() done = False i = 0 while not done and i < 100: if random.random() > epsilon: action = np.argmax(Q[state]) else: action = env.action_space.sample() next_state, reward, done, _ = env.step(action) next_action = np.argmax(Q[next_state]) Q[state][action] += alpha * ((reward + gamma * Q[next_state][next_action]) - Q[state][action]) state = next_state i += 1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, epsilon=0.8): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon *= 0.99 state = env.reset() done = False i = 0 while not done and i < 100: if random.random() > epsilon: action = np.argmax(Q[state]) else: action = env.action_space.sample() next_state, reward, done, _ = env.step(action) Q[state][action] += alpha * ((reward + gamma * max(Q[next_state])) - Q[state][action]) state = next_state i += 1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, epsilon=0.8): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon *= 0.99 state = env.reset() done = False i = 0 while not done and i < 100: policy = [1 - (env.action_space.n - 1) * (epsilon / (env.action_space.n)) if i == np.argmax(Q[state]) else epsilon / env.action_space.n for i in range(env.action_space.n)] action = np.random.choice(env.action_space.n, 1, p=policy)[0] next_state, reward, done, _ = env.step(action) Q[state][action] += alpha * ((reward + gamma * max(Q[next_state])) - Q[state][action]) state = next_state i += 1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sample_action(Q, state, epsilon): action_values = Q[state] nA = len(action_values) p_each = epsilon / nA action_max_value = np.argmax(action_values) probs = np.where([i == action_max_value for i in range(nA)], 1 - epsilon + p_each, p_each) sampled = np.random.choice(np.arange(nA), p=probs) return sampled def update_Q_Sarsa(Q, state, action, next_state, next_action, reward, alpha, gamma): Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action] - Q[state][action]) def sarsa(env, num_episodes, alpha, gamma=1.0): Q = defaultdict(lambda: np.zeros(env.nA)) for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1 / i_episode state = env.reset() action = sample_action(Q, state, epsilon) next_state, reward, done, info = env.step(action) while True: next_action = sample_action(Q, next_state, epsilon) update_Q_Sarsa(Q, state, action, next_state, next_action, reward, alpha, gamma) if done: break state, action = next_state, next_action next_state, reward, done, info = env.step(action) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
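As a concrete illustration of the probability vector that `sample_action` builds (a hedged example with made-up action values, not taken from the notebook): with four actions and $\epsilon = 0.1$, the greedy action is chosen with probability $1 - 0.1 + 0.1/4 = 0.925$ and each other action with probability $0.1/4 = 0.025$:

```python
import numpy as np

nA, epsilon = 4, 0.1                               # assumed example values
action_values = np.array([0.0, 2.0, -1.0, 0.5])    # hypothetical Q[s] for one state
p_each = epsilon / nA
probs = np.where(np.arange(nA) == np.argmax(action_values),
                 1 - epsilon + p_each, p_each)
print(probs)  # [0.025 0.925 0.025 0.025] -- sums to 1
```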
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_q_learning(Q, state, action, next_state, reward, alpha, gamma): Q[state][action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state][action]) def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1 / i_episode state = env.reset() action = sample_action(Q, state, epsilon) next_state, reward, done, info = env.step(action) while True: update_Q_q_learning(Q, state, action, next_state, reward, alpha, gamma) if done: break state = next_state action = sample_action(Q, state, epsilon) next_state, reward, done, info = env.step(action) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa(Q, state, action, next_state, reward, alpha, gamma, epsilon): nA = len(Q[next_state]) action_max_value = np.argmax(Q[next_state]) p_each = epsilon / nA expected_value = np.dot( np.where([i == action_max_value for i in range(nA)], 1 - epsilon + p_each, p_each), Q[next_state]) Q[state][action] += alpha * (reward + gamma * expected_value - Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 0.005 state = env.reset() while True: action = sample_action(Q, state, epsilon) next_state, reward, done, info = env.step(action) update_Q_expected_sarsa(Q, state, action, next_state, reward, alpha, gamma, epsilon) if done: break state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_greedy_action(Q, state): # if there are two or more actions for which Q[s][a] is maximized, choose uniformly between them greedy_actions = np.argwhere(Q[state] == np.amax(Q[state])) greedy_actions = greedy_actions.flatten() return np.random.choice(greedy_actions) def get_random_action(Q, state): return np.random.choice(np.arange(Q[state].size)) def eps_greedy_policy(Q, state, eps): return get_random_action(Q, state) if np.random.uniform() <= eps \ else get_greedy_action(Q, state) def sarsa(env, num_episodes, alpha, gamma=1.0, eps=1, final_eps=0.1, stop_eps_after=0.5): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # eps will decrease linearly and reach final_eps in episode stop_eps_at_episode final_eps = min(eps, final_eps) stop_eps_at_episode = num_episodes * stop_eps_after - 1 eps_delta = (eps - final_eps) / stop_eps_at_episode # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S_t = env.reset() # get initial state S_t A_t = eps_greedy_policy(Q, S_t, eps) # choose initial action A_t while True: S_t1, R_t1, done, _ = env.step(A_t) # execute A_t, get R_t+1, S_t+1 if done: # if the episode is completed, update Q[S_t][A_t] using an estimated return of zero Q[S_t][A_t] += alpha * (R_t1 - Q[S_t][A_t]) break A_t1 = eps_greedy_policy(Q, S_t1, eps) # choose action A_t+1 # update Q[S_t][A_t] using the estimated return from Q[S_t+1][A_t+1] Q[S_t][A_t] += alpha * (R_t1 + gamma * Q[S_t1][A_t1] - Q[S_t][A_t]) S_t = S_t1 A_t = A_t1 eps -= eps_delta return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
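Unlike the $\epsilon_i = 1/i$ schedules used in the earlier notebooks, this implementation anneals $\epsilon$ linearly: with $N$ episodes and a fraction `stop_eps_after` of them used for annealing, each episode subtracts$$\Delta\epsilon = \frac{\epsilon_{\text{init}} - \epsilon_{\text{final}}}{N \cdot \texttt{stop\_eps\_after} - 1},$$so $\epsilon$ reaches `final_eps` roughly halfway through training with the default `stop_eps_after=0.5` (and keeps shrinking afterwards, since the same decrement is applied in every episode; once it drops to or below zero the policy is effectively fully greedy). 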
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01, eps=0.01) # eps=.1 safe path, eps = .01 optimal path # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps=1, final_eps=0.1, stop_eps_after=0.5): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # eps will decrease linearly and reach final_eps in episode stop_eps_at_episode stop_eps_at_episode = num_episodes * stop_eps_after - 1 eps_delta = (eps - final_eps) / stop_eps_at_episode # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S_t = env.reset() # get initial state S_t done = False while not done: A_t = eps_greedy_policy(Q, S_t, eps) # choose action A_t according to the behaviour policy S_t1, R_t1, done, _ = env.step(A_t) # execute A_t, get R_t+1, S_t+1 A_max = get_greedy_action(Q, S_t1) # choose A_max according to the target policy # update Q[S_t][A_t] using the estimated return from Q[S_t+1][A_max] Q[S_t][A_t] += alpha * (R_t1 + gamma * Q[S_t1][A_max] - Q[S_t][A_t]) S_t = S_t1 eps -= eps_delta return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01, eps=0.1) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_estimated_return_for_eps_greedy_policy(Q, state, eps): prob = np.ones(Q[state].shape) * (eps / Q[state].size) # pi(a|s) = eps/|A(s)| else prob[np.argmax(Q[state])] = 1 - eps + eps / Q[state].size # pi(a|s) = 1 - eps + eps/|A(s)| for greedy return np.dot(prob, Q[state]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps=1, final_eps=0.1, stop_eps_after=0.5): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # eps will decrease linearly and reach final_eps in episode stop_eps_at_episode stop_eps_at_episode = num_episodes * stop_eps_after - 1 eps_delta = (eps - final_eps) / stop_eps_at_episode # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S_t = env.reset() # get initial state S_t done = False while not done: A_t = eps_greedy_policy(Q, S_t, eps) # choose action A_t according to the behaviour policy S_t1, R_t1, done, _ = env.step(A_t) # execute A_t, get R_t+1, S_t+1 G = get_estimated_return_for_eps_greedy_policy(Q, S_t1, eps) Q[S_t][A_t] += alpha * (R_t1 + gamma * G - Q[S_t][A_t]) # update Q[S_t][A_t] using G S_t = S_t1 eps -= eps_delta return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
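For reference, the `expected_sarsa` implementation above uses the Expected Sarsa target, replacing the sampled next action value with its expectation under the $\epsilon$-greedy policy $\pi$:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \Big).$$

This expectation is exactly what `get_estimated_return_for_eps_greedy_policy` computes via the dot product of the action probabilities with $Q(S_{t+1}, \cdot)$.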
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 0.5, eps=0.01, final_eps=0.001) # higher alpha but lower eps # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_prob(Q_s, eps, nA): prob = np.ones(nA)*eps/nA prob[np.argmax(Q_s)] += 1.-eps return prob def epsilon_greedy(env, Q, state, eps): return np.random.choice(np.arange(env.nA), p=get_prob(Q[state], eps, env.nA)) \ if state in Q else env.action_space.sample() def update_Q(Q0, Q1, R1, alpha, gamma): Q0 = Q0 + alpha*(R1 + gamma*Q1 - Q0) return Q0 def sarsa(env, num_episodes, alpha, gamma=1.0, eps=1.0, decay=0.99999, min_eps=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = eps / i_episode state = env.reset() action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) prev_state, prev_action, prev_reward, state = state, action, reward, next_state while True: action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) Q0, Q1, R1 = Q[prev_state][prev_action], Q[state][action], prev_reward Q[prev_state][prev_action] = update_Q(Q0, Q1, R1, alpha, gamma) if done: Q0, Q1, R1 = Q[state][action], 0, reward Q[state][action] = update_Q(Q0, Q1, R1, alpha, gamma) break prev_state, prev_action, prev_reward, state = state, action, reward, next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_max_Q_from_next_state(Q_s): return max(Q_s) def get_prob(env, Q_s, eps): prob = np.ones(env.nA)*eps/env.nA prob[np.argmax(Q_s)] += 1.-eps return prob def epsilon_greedy(env, Q, state, eps): return np.random.choice(np.arange(env.nA), p=get_prob(env, Q[state], eps)) \ if state in Q else env.action_space.sample() def update_Q(Q0, Q1, R0, alpha, gamma): _Q0 = Q0 + alpha*(R0 + gamma*Q1 - Q0) return _Q0 def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1./i_episode state = env.reset() action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) prev_state, prev_action, prev_reward, state = state, action, reward, next_state while True: action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) Q0, Q1, R0 = Q[prev_state][prev_action], get_max_Q_from_next_state(Q[state]), prev_reward Q[prev_state][prev_action] = update_Q(Q0, Q1, R0, alpha, gamma) prev_state, prev_action, prev_reward, state = state, action, reward, next_state if done: Q0, Q1, R0 = Q[prev_state][prev_action], 0, prev_reward Q[prev_state][prev_action] = update_Q(Q0, Q1, R0, alpha, gamma) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_expected_Q(env, Q_s, eps): prob = get_prob(env, Q_s, eps) return np.dot(prob, Q_s) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 0.01/i_episode state = env.reset() action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) prev_state, prev_action, prev_reward, state = state, action, reward, next_state while True: action = epsilon_greedy(env, Q, state, eps) next_state, reward, done, info = env.step(action) Q0, Q1, R0 = Q[prev_state][prev_action], get_expected_Q(env, Q[state], eps), prev_reward Q[prev_state][prev_action] = update_Q(Q0, Q1, R0, alpha, gamma) prev_state, prev_action, prev_reward, state = state, action, reward, next_state if done: Q0, Q1, R0 = Q[prev_state][prev_action], 0, prev_reward Q[prev_state][prev_action] = update_Q(Q0, Q1, R0, alpha, gamma) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code # !conda install seaborn -y import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_action(Q_s, epsilon): exploitation = np.random.choice(np.arange(2), p=[epsilon, 1-epsilon]) if exploitation: a = np.argmax(Q_s) else: a = np.random.choice(np.arange(4)) return a def update_Q(Q, s, a, r_, s_, a_, alpha, gamma): Q[s][a] += alpha*(r_ + gamma*Q[s_][a_] - Q[s][a]) return Q import math def sarsa(env, num_episodes, alpha, gamma=1.0, max_eps=1.0, min_eps=0.05): eps_gen = (e for e in np.linspace(max_eps, min_eps, num_episodes)) # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{} | eps: {}".format(i_episode, num_episodes, eps), end="") sys.stdout.flush() ## TODO: complete the function # initialize epsilon, state and action eps = next(eps_gen) s = env.reset() a = epsilon_greedy_action(Q[s], eps) visit = defaultdict(lambda: np.zeros(env.nA)) while True: s_, r_, terminal, _ = env.step(a) a_ = epsilon_greedy_action(Q[s], eps) if not visit[s][a]: # first visit implementation Q = update_Q(Q, s, a, r_, s_, a_, alpha, gamma) visit[s][a] = 1 s = s_ a = a_ if terminal: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 | eps: 0.05009500950095014 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
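The `expected_sarsa` stub above is left unfilled in this copy, so the next cell will call it as-is. Purely as an illustrative sketch (not the reference solution), the episode loop could be completed along the following lines, reusing the `epsilon_greedy_action` helper defined in the Sarsa section of this notebook and forming the $\epsilon$-greedy expectation over the next state's action values:

```
import numpy as np
from collections import defaultdict

def expected_sarsa_sketch(env, num_episodes, alpha, gamma=1.0, max_eps=1.0, min_eps=0.05):
    # illustrative completion of the stub above; assumes epsilon_greedy_action
    # from the Sarsa section of this notebook is in scope
    eps_schedule = np.linspace(max_eps, min_eps, num_episodes)
    Q = defaultdict(lambda: np.zeros(env.nA))
    for i_episode in range(1, num_episodes + 1):
        eps = eps_schedule[i_episode - 1]
        s = env.reset()
        while True:
            a = epsilon_greedy_action(Q[s], eps)        # behaviour policy: eps-greedy
            s_, r_, done, _ = env.step(a)
            probs = np.ones(env.nA) * eps / env.nA      # eps-greedy action probabilities
            probs[np.argmax(Q[s_])] += 1.0 - eps
            expected_Q = np.dot(probs, Q[s_])           # expectation over next actions
            Q[s][a] += alpha * (r_ + gamma * expected_Q - Q[s][a])
            s = s_
            if done:
                break
    return Q
```

The terminal state's entry in the `defaultdict` is never updated, so the expectation is zero on the final transition and the last target reduces to the final reward.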
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_action(Q, state, nA, epsilon): if np.random.random() > epsilon: return np.argmax(Q[state]) # choose action with highest expected reward else: return np.random.choice(np.arange(nA)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): Q_next_state_next_action = 0 if next_state is not None: Q_next_state_next_action = Q[next_state][next_action] alternative_estimate = reward + (gamma * Q_next_state_next_action) current_estimate = Q[state][action] Q[state][action] += alpha * (alternative_estimate - current_estimate) def plot_performance(num_episodes, avg_scores, plot_every): plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = 1.0 / i_episode state = env.reset() action = get_action(Q, state, nA, epsilon) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = get_action(Q, next_state, nA, epsilon) update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action else: update_Q_sarsa(alpha, gamma, Q, state, action, reward, None, None) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plot_performance(num_episodes, avg_scores, plot_every) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
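In `update_Q_sarsa` above, the quantity `alternative_estimate - current_estimate` is the Sarsa TD error

$$\delta_t = R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t),$$

and when the episode terminates the helper is called with `next_state=None`, so the bootstrapped term is zero and the target reduces to $R_{t+1}$.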
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state): current_estimate = Q[state][action] Q_next_state = np.max(Q[next_state]) alternative_estimate = reward + (gamma * Q_next_state) Q[state][action] += alpha * (alternative_estimate - current_estimate) def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = 1 / i_episode state = env.reset() while True: action = get_action(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) score += reward state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plot_performance(num_episodes, avg_scores, plot_every) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
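Note that `q_learning` above applies `update_Q_sarsamax` on every transition, including the terminal one. This is still correct here because the terminal state's entry in the `defaultdict` is never updated, so $\max_a Q(S_T, a) = 0$ and the final target reduces to the last reward.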
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) #Q_sarsamax = q_learning(env, 1, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_probabilities(Q_state, nA, epsilon): probs = np.ones(nA) * epsilon / nA probs[np.argmax(Q_state)] += 1 - epsilon return probs def update_Q_expected_sarsa(Q, state, action, reward, next_state, nA, probs, alpha, gamma): current_estimate = Q[state][action] sum_of_expected_values = sum([probs[action] * Q[next_state][action] for action in range(nA)]) alternative_estimate = reward + (gamma * sum_of_expected_values) Q[state][action] += alpha * (alternative_estimate - current_estimate) def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 epsilon = 0.005 state = env.reset() while True: action = get_action(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) probs = get_probabilities(Q[next_state], nA, epsilon) update_Q_expected_sarsa(Q, state, action, reward, next_state, nA, probs, alpha, gamma) score += reward state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plot_performance(num_episodes, avg_scores, plot_every) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
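To make `get_probabilities` concrete: with the fixed $\epsilon = 0.005$ used above and $|\mathcal{A}| = 4$ actions, each non-greedy action receives probability $\epsilon / 4 = 0.00125$ while the greedy action receives $1 - \epsilon + \epsilon / 4 = 0.99625$, so the expected-value target is very close to, but not exactly, the Q-learning max.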
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values from tqdm import tqdm_notebook as tqdm ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_epsilon_greedy_action(Q, nA, state, epsilon): if random.random() > epsilon: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) def temporal_difference(env, num_episodes, alpha, func, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes for i_episode in tqdm(range(1, num_episodes+1)): epsilon = 1.0/i_episode state = env.reset() action = get_epsilon_greedy_action(Q, nA, state, epsilon) done = False while not done: next_state, reward, done, _ = env.step(action) # print(next_state, reward, done) next_action = get_epsilon_greedy_action(Q, nA, next_state, epsilon) if not done else None update(Q, state, action, reward, alpha, gamma, next_state, next_action, func) state = next_state action = next_action return Q def update(Q, state, action, reward, alpha, gamma, next_state, next_action, func): Qsa_next = func(Q[next_state], next_action) if next_action is not None else 0 target = reward + (gamma*Qsa_next) Q[state][action] += (alpha*(target - Q[state][action])) def sarsa_func(Q_s, action): return Q_s[action] ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = temporal_difference(env, 6000, .01, func=sarsa_func) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output /home/jefferson/anaconda3/envs/deep-reinforcement/lib/python3.7/site-packages/ipykernel_launcher.py:13: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0 Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook` del sys.path[0] ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code def Q_func(Q_s, action): return max(Q_s) # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = temporal_difference(env, 5000, .01, func=Q_func) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output /home/jefferson/anaconda3/envs/deep-reinforcement/lib/python3.7/site-packages/ipykernel_launcher.py:13: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0 Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook` del sys.path[0] ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
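As a side note on the generic `temporal_difference` driver used in this copy: with a fixed $\epsilon$, the Expected Sarsa target could also be supplied as a `func` closure, just like `sarsa_func` and `Q_func`. The sketch below is only an illustration under that fixed-$\epsilon$ assumption (the helper name is hypothetical); the next cell instead defines a dedicated `expected_sarsa` routine.

```
import numpy as np

def make_expected_func(eps, nA):
    # returns a func(Q_s, next_action) computing the eps-greedy expectation over Q_s,
    # suitable for passing to temporal_difference(..., func=...)
    def expected_func(Q_s, next_action):
        probs = np.ones(nA) * eps / nA
        probs[np.argmax(Q_s)] += 1.0 - eps
        return np.dot(probs, Q_s)
    return expected_func

# illustrative usage only:
# Q_exp = temporal_difference(env, 5000, .01, func=make_expected_func(0.005, env.action_space.n))
```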
###Code def update_expected_sarsa(Q, state, action, reward, alpha, gamma, epsilon, next_state, next_action): nA = len(Q[state]) state_policy = (np.ones(nA) * epsilon) / nA state_policy[np.argmax(Q[next_state])] = 1 - epsilon + (epsilon/nA) Qsa_next = np.dot(state_policy, Q[next_state]) target = reward + (gamma*Qsa_next) Q[state][action] += (alpha*(target - Q[state][action])) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes for i_episode in tqdm(range(1, num_episodes+1)): epsilon = 1.0/i_episode state = env.reset() action = get_epsilon_greedy_action(Q, nA, state, epsilon) done = False while not done: next_state, reward, done, _ = env.step(action) # print(next_state, reward, done) next_action = get_epsilon_greedy_action(Q, nA, next_state, epsilon) if not done else None update_expected_sarsa(Q, state, action, reward, alpha, gamma, epsilon, next_state, next_action) state = next_state action = next_action return Q # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output /home/jefferson/anaconda3/envs/deep-reinforcement/lib/python3.7/site-packages/ipykernel_launcher.py:15: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0 Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook` from ipykernel import kernelapp as app ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. 
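Alongside that check, it can help to see how a state index maps onto the grid. The small optional snippet below (not part of the original notebook) converts a few indices to (row, column) coordinates, assuming the 12-column layout shown above:
```
# map a few state indices to (row, col) on the 4x12 grid
for state in (0, 11, 36, 47):
    row, col = divmod(state, 12)   # 12 columns per row
    print("state {:2d} -> row {}, col {}".format(state, row, col))
# state 36 is the start (bottom-left corner); state 47 is the terminal goal (bottom-right corner)
```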
###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code class EpsilonGreedyPolicy(): def __init__(self, Q, action_space, epsilon): self.Q = Q # Action-value function self.actions = action_space self.epsilon = epsilon def get_action(self, state): greedy_choice = np.argmax(self.Q[state]) random_choice = np.random.choice(self.actions) epsilon_greedy_choice = np.random.choice( [greedy_choice, random_choice], p = [1-self.epsilon, self.epsilon] ) return epsilon_greedy_choice def sarsa(env, num_episodes, alpha, gamma=1.0, eps_decay=0.99, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = 1 # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # epsilon = 1 - 0.9 * (i_episode - 1) / num_episodes # epsilon = max(epsilon*eps_decay, eps_min) epsilon = 1.0 / i_episode policy = EpsilonGreedyPolicy(Q, range(env.action_space.n), epsilon) state = env.reset() # state = np.random.choice(range(env.observation_space.n)) # Try a random starting point? action = policy.get_action(state) while True: # Perform A next_state, reward, done, info = env.step(action) # Choose A' next_action = policy.get_action(next_state) # Update Q new_Q = Q[next_state][next_action] if not done else 0 Q[state][action] += alpha * ((reward + gamma * new_Q) - Q[state][action]) policy = EpsilonGreedyPolicy(Q, range(env.action_space.n), epsilon) # Update policy state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = 1 # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0 / i_episode policy = EpsilonGreedyPolicy(Q, range(env.action_space.n), epsilon) state = env.reset() while True: # Choose and Perform A action = policy.get_action(state) next_state, reward, done, info = env.step(action) # Update Q new_Q = max(Q[next_state]) if not done else 0 Q[state][action] += alpha * ((reward + gamma * new_Q) - Q[state][action]) policy = EpsilonGreedyPolicy(Q, range(env.action_space.n), epsilon) # Update policy state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
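Before running it, it may help to keep the two targets side by side: Sarsa bootstraps from the action the agent actually selects, $R_{t+1} + \gamma Q(S_{t+1}, A_{t+1})$, while Q-learning (Sarsamax) bootstraps from the greedy action, $R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a)$. A tiny comparison with made-up numbers (this snippet is illustrative only and does not touch the environment):
```
import numpy as np

Q_next = np.array([-2.0, -1.0, -3.0, -1.5])   # hypothetical action values for S'
reward, gamma, next_action = -1.0, 1.0, 3     # pretend the sampled next action was 3

sarsa_target = reward + gamma * Q_next[next_action]   # -1.0 + (-1.5) = -2.5
qlearning_target = reward + gamma * np.max(Q_next)    # -1.0 + (-1.0) = -2.0
print(sarsa_target, qlearning_target)
```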
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = 1 actions = range(env.action_space.n) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # epsilon = 1.0 / i_episode epsilon = 0.005 # Constant epsilon for expectation policy = EpsilonGreedyPolicy(Q, range(env.action_space.n), epsilon) state = env.reset() while True: # Choose and Perform A action = policy.get_action(state) next_state, reward, done, info = env.step(action) # Calculate expected return next_G = 0 if not done: next_G = epsilon * sum([Q[next_state][action] for action in actions]) / len(actions) + (1 - epsilon) * max(Q[next_state]) # Update Q Q[state][action] += alpha * ((reward + gamma * next_G) - Q[state][action]) policy = EpsilonGreedyPolicy(Q, actions, epsilon) # Update policy state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
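One note before running it: because the target above averages over the $\epsilon$-greedy policy instead of sampling a single next action, it has lower variance than the Sarsa target, which helps explain why a comparatively large step size (the next cell passes `alpha = 1`) can still work well here. The shortcut used above, $\epsilon \cdot \mathrm{mean}(Q(S',\cdot)) + (1-\epsilon)\max_a Q(S',a)$, is the same as the policy-weighted sum; a quick sanity check with made-up numbers:
```
import numpy as np

Q_next, eps = np.array([-2.0, -1.0, -3.0, -1.5]), 0.005
nA = len(Q_next)

# shortcut form used in expected_sarsa above
shortcut = eps * Q_next.mean() + (1 - eps) * Q_next.max()

# explicit policy-weighted expectation
probs = np.ones(nA) * eps / nA
probs[np.argmax(Q_next)] += 1 - eps
weighted = np.dot(probs, Q_next)

print(np.isclose(shortcut, weighted))  # True
```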
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym from collections import deque import random import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt import check_test from plot_utils import plot_values %matplotlib inline ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, epsilon, n_actions): # choose action according to epsilon-greedy policy if random.random() > epsilon: return np.argmax(Q[state]) else: # return env.action_space.sample() return random.choice(np.arange(env.action_space.n)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step if next_state is not None: Qsa_next = Q[next_state][next_action] else: Qsa_next = 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9999, eps_min=0.001, plot_every=100): n_actions = env.action_space.n epsilon = eps_start Q = defaultdict(lambda: np.zeros(n_actions)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(num_episodes): # monitor progress if i_episode % plot_every == 0: print(f"\rEpisode {i_episode}/{num_episodes}, average score: {np.mean(tmp_scores) if len(tmp_scores) > 0 else 0}, epsilon: {epsilon}", end="") sys.stdout.flush() score = 0 state = env.reset() epsilon = 1.0 / (i_episode + 1) action = epsilon_greedy(Q, state, epsilon, n_actions) while True: next_state, reward, done, _ = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, epsilon, n_actions) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
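As a quick numeric check of the update rule implemented in `update_Q_sarsa` (the numbers below are made up, not taken from the environment): with $\alpha = 0.01$, $\gamma = 1$, a current estimate of $0$, a reward of $-1$, and a next state-action value of $-2$, the TD target is $-3$ and the estimate moves a small step toward it:
```
alpha, gamma = 0.01, 1.0
current, reward, Qsa_next = 0.0, -1.0, -2.0

target = reward + gamma * Qsa_next                 # TD target: -3.0
new_value = current + alpha * (target - current)   # 0.0 + 0.01 * (-3.0) = -0.03
print(new_value)
```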
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 50000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 49900/50000, average score: -13.0, epsilon: 2.004008016032064e-0555 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def greedy(Q, state): # choose action according to greedy policy return np.max(Q[state]) def epsilon_greedy(Q, state, epsilon, n_actions): # choose action according to epsilon-greedy policy if random.random() > epsilon: return np.argmax(Q[state]) else: return random.choice(np.arange(n_actions)) def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step if next_state is not None: Qsa_next = greedy(Q, next_state) else: Qsa_next = 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + alpha * (target - current) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, plot_every=100): n_actions = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(n_actions)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(num_episodes): score = 0 state = env.reset() epsilon = 1.0 / (i_episode + 1) action = epsilon_greedy(Q, state, epsilon, n_actions) # monitor progress if i_episode % 100 == 0: print(f"\rEpisode {i_episode}/{num_episodes}, average score: {np.mean(tmp_scores) if len(tmp_scores) > 0 else 0}, epsilon: {epsilon}", end="") sys.stdout.flush() while True: next_state, reward, done, _ = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, epsilon, n_actions) Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) state = next_state action = next_action if done: Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot 
performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 4900/5000, average score: -13.0, epsilon: 0.000204039991838400333 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, epsilon, n_actions): # choose action according to epsilon-greedy policy if random.random() > epsilon: return np.argmax(Q[state]) else: return random.choice(np.arange(n_actions)) def update_Q_expectedsarsa(alpha, gamma, Q, state, action, reward, n_actions, epsilon, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(n_actions) * epsilon / n_actions # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - epsilon + (epsilon / n_actions) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct TD target new_value = current + alpha * (target - current) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): n_actions = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(n_actions)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(num_episodes): score = 0 state = env.reset() epsilon = 0.1 / (i_episode + 1) action = epsilon_greedy(Q, state, epsilon, n_actions) # monitor progress if i_episode % 100 == 0: print(f"\rEpisode {i_episode}/{num_episodes}, average score: {np.mean(tmp_scores) if len(tmp_scores) > 0 else 0}, epsilon: {epsilon}", end="") sys.stdout.flush() while True: next_state, reward, done, _ = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, epsilon, n_actions) Q[state][action] = update_Q_expectedsarsa(alpha, gamma, Q, state, action, reward, n_actions, epsilon, next_state) state = next_state action = next_action if done: Q[state][action] = update_Q_expectedsarsa(alpha, gamma, Q, state, action, reward, n_actions, epsilon) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
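The next cell turns the returned `Q` into a policy grid and a state-value list using only `argmax` and `max`. A tiny self-contained illustration of that conversion, with a made-up two-state `Q` (unvisited states fall back to `-1` for the policy and `0` for the value, exactly as in the cell below):
```
import numpy as np
from collections import defaultdict

Q_toy = defaultdict(lambda: np.zeros(4))
Q_toy[0] = np.array([-3.0, -1.0, -2.0, -4.0])   # made-up action values for state 0
Q_toy[1] = np.array([-1.0, -2.0, -5.0, -0.5])   # made-up action values for state 1

policy = [np.argmax(Q_toy[s]) if s in Q_toy else -1 for s in range(3)]
values = [np.max(Q_toy[s]) if s in Q_toy else 0 for s in range(3)]
print(policy)   # [1, 3, -1]  (state 2 was never visited)
print(values)   # [-1.0, -0.5, 0]
```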
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 500, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 400/500, average score: -13.02, epsilon: 0.0002493765586034913 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values from gym import envs print(envs.registry.all()) ###Output dict_values([EnvSpec(Copy-v0), EnvSpec(RepeatCopy-v0), EnvSpec(ReversedAddition-v0), EnvSpec(ReversedAddition3-v0), EnvSpec(DuplicatedInput-v0), EnvSpec(Reverse-v0), EnvSpec(CartPole-v0), EnvSpec(CartPole-v1), EnvSpec(MountainCar-v0), EnvSpec(MountainCarContinuous-v0), EnvSpec(Pendulum-v0), EnvSpec(Acrobot-v1), EnvSpec(LunarLander-v2), EnvSpec(LunarLanderContinuous-v2), EnvSpec(BipedalWalker-v2), EnvSpec(BipedalWalkerHardcore-v2), EnvSpec(CarRacing-v0), EnvSpec(Blackjack-v0), EnvSpec(FrozenLake-v0), EnvSpec(FrozenLake8x8-v0), EnvSpec(NChain-v0), EnvSpec(Roulette-v0), EnvSpec(Taxi-v2), EnvSpec(GuessingGame-v0), EnvSpec(HotterColder-v0), EnvSpec(CliffWalking-v0), EnvSpec(Reacher-v1), EnvSpec(InvertedPendulum-v1), EnvSpec(InvertedDoublePendulum-v1), EnvSpec(HalfCheetah-v1), EnvSpec(Hopper-v1), EnvSpec(Swimmer-v1), EnvSpec(Walker2d-v1), EnvSpec(Ant-v1), EnvSpec(Humanoid-v1), EnvSpec(HumanoidStandup-v1), EnvSpec(AirRaid-v0), EnvSpec(AirRaid-v3), EnvSpec(AirRaidDeterministic-v0), EnvSpec(AirRaidDeterministic-v3), EnvSpec(AirRaidNoFrameskip-v0), EnvSpec(AirRaidNoFrameskip-v3), EnvSpec(AirRaid-ram-v0), EnvSpec(AirRaid-ram-v3), EnvSpec(AirRaid-ramDeterministic-v0), EnvSpec(AirRaid-ramDeterministic-v3), EnvSpec(AirRaid-ramNoFrameskip-v0), EnvSpec(AirRaid-ramNoFrameskip-v3), EnvSpec(Alien-v0), EnvSpec(Alien-v3), EnvSpec(AlienDeterministic-v0), EnvSpec(AlienDeterministic-v3), EnvSpec(AlienNoFrameskip-v0), EnvSpec(AlienNoFrameskip-v3), EnvSpec(Alien-ram-v0), EnvSpec(Alien-ram-v3), EnvSpec(Alien-ramDeterministic-v0), EnvSpec(Alien-ramDeterministic-v3), EnvSpec(Alien-ramNoFrameskip-v0), EnvSpec(Alien-ramNoFrameskip-v3), EnvSpec(Amidar-v0), EnvSpec(Amidar-v3), EnvSpec(AmidarDeterministic-v0), EnvSpec(AmidarDeterministic-v3), EnvSpec(AmidarNoFrameskip-v0), EnvSpec(AmidarNoFrameskip-v3), EnvSpec(Amidar-ram-v0), EnvSpec(Amidar-ram-v3), EnvSpec(Amidar-ramDeterministic-v0), EnvSpec(Amidar-ramDeterministic-v3), EnvSpec(Amidar-ramNoFrameskip-v0), EnvSpec(Amidar-ramNoFrameskip-v3), EnvSpec(Assault-v0), EnvSpec(Assault-v3), EnvSpec(AssaultDeterministic-v0), EnvSpec(AssaultDeterministic-v3), EnvSpec(AssaultNoFrameskip-v0), EnvSpec(AssaultNoFrameskip-v3), EnvSpec(Assault-ram-v0), EnvSpec(Assault-ram-v3), EnvSpec(Assault-ramDeterministic-v0), 
EnvSpec(Assault-ramDeterministic-v3), EnvSpec(Assault-ramNoFrameskip-v0), EnvSpec(Assault-ramNoFrameskip-v3), EnvSpec(Asterix-v0), EnvSpec(Asterix-v3), EnvSpec(AsterixDeterministic-v0), EnvSpec(AsterixDeterministic-v3), EnvSpec(AsterixNoFrameskip-v0), EnvSpec(AsterixNoFrameskip-v3), EnvSpec(Asterix-ram-v0), EnvSpec(Asterix-ram-v3), EnvSpec(Asterix-ramDeterministic-v0), EnvSpec(Asterix-ramDeterministic-v3), EnvSpec(Asterix-ramNoFrameskip-v0), EnvSpec(Asterix-ramNoFrameskip-v3), EnvSpec(Asteroids-v0), EnvSpec(Asteroids-v3), EnvSpec(AsteroidsDeterministic-v0), EnvSpec(AsteroidsDeterministic-v3), EnvSpec(AsteroidsNoFrameskip-v0), EnvSpec(AsteroidsNoFrameskip-v3), EnvSpec(Asteroids-ram-v0), EnvSpec(Asteroids-ram-v3), EnvSpec(Asteroids-ramDeterministic-v0), EnvSpec(Asteroids-ramDeterministic-v3), EnvSpec(Asteroids-ramNoFrameskip-v0), EnvSpec(Asteroids-ramNoFrameskip-v3), EnvSpec(Atlantis-v0), EnvSpec(Atlantis-v3), EnvSpec(AtlantisDeterministic-v0), EnvSpec(AtlantisDeterministic-v3), EnvSpec(AtlantisNoFrameskip-v0), EnvSpec(AtlantisNoFrameskip-v3), EnvSpec(Atlantis-ram-v0), EnvSpec(Atlantis-ram-v3), EnvSpec(Atlantis-ramDeterministic-v0), EnvSpec(Atlantis-ramDeterministic-v3), EnvSpec(Atlantis-ramNoFrameskip-v0), EnvSpec(Atlantis-ramNoFrameskip-v3), EnvSpec(BankHeist-v0), EnvSpec(BankHeist-v3), EnvSpec(BankHeistDeterministic-v0), EnvSpec(BankHeistDeterministic-v3), EnvSpec(BankHeistNoFrameskip-v0), EnvSpec(BankHeistNoFrameskip-v3), EnvSpec(BankHeist-ram-v0), EnvSpec(BankHeist-ram-v3), EnvSpec(BankHeist-ramDeterministic-v0), EnvSpec(BankHeist-ramDeterministic-v3), EnvSpec(BankHeist-ramNoFrameskip-v0), EnvSpec(BankHeist-ramNoFrameskip-v3), EnvSpec(BattleZone-v0), EnvSpec(BattleZone-v3), EnvSpec(BattleZoneDeterministic-v0), EnvSpec(BattleZoneDeterministic-v3), EnvSpec(BattleZoneNoFrameskip-v0), EnvSpec(BattleZoneNoFrameskip-v3), EnvSpec(BattleZone-ram-v0), EnvSpec(BattleZone-ram-v3), EnvSpec(BattleZone-ramDeterministic-v0), EnvSpec(BattleZone-ramDeterministic-v3), EnvSpec(BattleZone-ramNoFrameskip-v0), EnvSpec(BattleZone-ramNoFrameskip-v3), EnvSpec(BeamRider-v0), EnvSpec(BeamRider-v3), EnvSpec(BeamRiderDeterministic-v0), EnvSpec(BeamRiderDeterministic-v3), EnvSpec(BeamRiderNoFrameskip-v0), EnvSpec(BeamRiderNoFrameskip-v3), EnvSpec(BeamRider-ram-v0), EnvSpec(BeamRider-ram-v3), EnvSpec(BeamRider-ramDeterministic-v0), EnvSpec(BeamRider-ramDeterministic-v3), EnvSpec(BeamRider-ramNoFrameskip-v0), EnvSpec(BeamRider-ramNoFrameskip-v3), EnvSpec(Berzerk-v0), EnvSpec(Berzerk-v3), EnvSpec(BerzerkDeterministic-v0), EnvSpec(BerzerkDeterministic-v3), EnvSpec(BerzerkNoFrameskip-v0), EnvSpec(BerzerkNoFrameskip-v3), EnvSpec(Berzerk-ram-v0), EnvSpec(Berzerk-ram-v3), EnvSpec(Berzerk-ramDeterministic-v0), EnvSpec(Berzerk-ramDeterministic-v3), EnvSpec(Berzerk-ramNoFrameskip-v0), EnvSpec(Berzerk-ramNoFrameskip-v3), EnvSpec(Bowling-v0), EnvSpec(Bowling-v3), EnvSpec(BowlingDeterministic-v0), EnvSpec(BowlingDeterministic-v3), EnvSpec(BowlingNoFrameskip-v0), EnvSpec(BowlingNoFrameskip-v3), EnvSpec(Bowling-ram-v0), EnvSpec(Bowling-ram-v3), EnvSpec(Bowling-ramDeterministic-v0), EnvSpec(Bowling-ramDeterministic-v3), EnvSpec(Bowling-ramNoFrameskip-v0), EnvSpec(Bowling-ramNoFrameskip-v3), EnvSpec(Boxing-v0), EnvSpec(Boxing-v3), EnvSpec(BoxingDeterministic-v0), EnvSpec(BoxingDeterministic-v3), EnvSpec(BoxingNoFrameskip-v0), EnvSpec(BoxingNoFrameskip-v3), EnvSpec(Boxing-ram-v0), EnvSpec(Boxing-ram-v3), EnvSpec(Boxing-ramDeterministic-v0), EnvSpec(Boxing-ramDeterministic-v3), EnvSpec(Boxing-ramNoFrameskip-v0), 
EnvSpec(Boxing-ramNoFrameskip-v3), EnvSpec(Breakout-v0), EnvSpec(Breakout-v3), EnvSpec(BreakoutDeterministic-v0), EnvSpec(BreakoutDeterministic-v3), EnvSpec(BreakoutNoFrameskip-v0), EnvSpec(BreakoutNoFrameskip-v3), EnvSpec(Breakout-ram-v0), EnvSpec(Breakout-ram-v3), EnvSpec(Breakout-ramDeterministic-v0), EnvSpec(Breakout-ramDeterministic-v3), EnvSpec(Breakout-ramNoFrameskip-v0), EnvSpec(Breakout-ramNoFrameskip-v3), EnvSpec(Carnival-v0), EnvSpec(Carnival-v3), EnvSpec(CarnivalDeterministic-v0), EnvSpec(CarnivalDeterministic-v3), EnvSpec(CarnivalNoFrameskip-v0), EnvSpec(CarnivalNoFrameskip-v3), EnvSpec(Carnival-ram-v0), EnvSpec(Carnival-ram-v3), EnvSpec(Carnival-ramDeterministic-v0), EnvSpec(Carnival-ramDeterministic-v3), EnvSpec(Carnival-ramNoFrameskip-v0), EnvSpec(Carnival-ramNoFrameskip-v3), EnvSpec(Centipede-v0), EnvSpec(Centipede-v3), EnvSpec(CentipedeDeterministic-v0), EnvSpec(CentipedeDeterministic-v3), EnvSpec(CentipedeNoFrameskip-v0), EnvSpec(CentipedeNoFrameskip-v3), EnvSpec(Centipede-ram-v0), EnvSpec(Centipede-ram-v3), EnvSpec(Centipede-ramDeterministic-v0), EnvSpec(Centipede-ramDeterministic-v3), EnvSpec(Centipede-ramNoFrameskip-v0), EnvSpec(Centipede-ramNoFrameskip-v3), EnvSpec(ChopperCommand-v0), EnvSpec(ChopperCommand-v3), EnvSpec(ChopperCommandDeterministic-v0), EnvSpec(ChopperCommandDeterministic-v3), EnvSpec(ChopperCommandNoFrameskip-v0), EnvSpec(ChopperCommandNoFrameskip-v3), EnvSpec(ChopperCommand-ram-v0), EnvSpec(ChopperCommand-ram-v3), EnvSpec(ChopperCommand-ramDeterministic-v0), EnvSpec(ChopperCommand-ramDeterministic-v3), EnvSpec(ChopperCommand-ramNoFrameskip-v0), EnvSpec(ChopperCommand-ramNoFrameskip-v3), EnvSpec(CrazyClimber-v0), EnvSpec(CrazyClimber-v3), EnvSpec(CrazyClimberDeterministic-v0), EnvSpec(CrazyClimberDeterministic-v3), EnvSpec(CrazyClimberNoFrameskip-v0), EnvSpec(CrazyClimberNoFrameskip-v3), EnvSpec(CrazyClimber-ram-v0), EnvSpec(CrazyClimber-ram-v3), EnvSpec(CrazyClimber-ramDeterministic-v0), EnvSpec(CrazyClimber-ramDeterministic-v3), EnvSpec(CrazyClimber-ramNoFrameskip-v0), EnvSpec(CrazyClimber-ramNoFrameskip-v3), EnvSpec(DemonAttack-v0), EnvSpec(DemonAttack-v3), EnvSpec(DemonAttackDeterministic-v0), EnvSpec(DemonAttackDeterministic-v3), EnvSpec(DemonAttackNoFrameskip-v0), EnvSpec(DemonAttackNoFrameskip-v3), EnvSpec(DemonAttack-ram-v0), EnvSpec(DemonAttack-ram-v3), EnvSpec(DemonAttack-ramDeterministic-v0), EnvSpec(DemonAttack-ramDeterministic-v3), EnvSpec(DemonAttack-ramNoFrameskip-v0), EnvSpec(DemonAttack-ramNoFrameskip-v3), EnvSpec(DoubleDunk-v0), EnvSpec(DoubleDunk-v3), EnvSpec(DoubleDunkDeterministic-v0), EnvSpec(DoubleDunkDeterministic-v3), EnvSpec(DoubleDunkNoFrameskip-v0), EnvSpec(DoubleDunkNoFrameskip-v3), EnvSpec(DoubleDunk-ram-v0), EnvSpec(DoubleDunk-ram-v3), EnvSpec(DoubleDunk-ramDeterministic-v0), EnvSpec(DoubleDunk-ramDeterministic-v3), EnvSpec(DoubleDunk-ramNoFrameskip-v0), EnvSpec(DoubleDunk-ramNoFrameskip-v3), EnvSpec(ElevatorAction-v0), EnvSpec(ElevatorAction-v3), EnvSpec(ElevatorActionDeterministic-v0), EnvSpec(ElevatorActionDeterministic-v3), EnvSpec(ElevatorActionNoFrameskip-v0), EnvSpec(ElevatorActionNoFrameskip-v3), EnvSpec(ElevatorAction-ram-v0), EnvSpec(ElevatorAction-ram-v3), EnvSpec(ElevatorAction-ramDeterministic-v0), EnvSpec(ElevatorAction-ramDeterministic-v3), EnvSpec(ElevatorAction-ramNoFrameskip-v0), EnvSpec(ElevatorAction-ramNoFrameskip-v3), EnvSpec(Enduro-v0), EnvSpec(Enduro-v3), EnvSpec(EnduroDeterministic-v0), EnvSpec(EnduroDeterministic-v3), EnvSpec(EnduroNoFrameskip-v0), EnvSpec(EnduroNoFrameskip-v3), 
EnvSpec(Enduro-ram-v0), EnvSpec(Enduro-ram-v3), EnvSpec(Enduro-ramDeterministic-v0), EnvSpec(Enduro-ramDeterministic-v3), EnvSpec(Enduro-ramNoFrameskip-v0), EnvSpec(Enduro-ramNoFrameskip-v3), EnvSpec(FishingDerby-v0), EnvSpec(FishingDerby-v3), EnvSpec(FishingDerbyDeterministic-v0), EnvSpec(FishingDerbyDeterministic-v3), EnvSpec(FishingDerbyNoFrameskip-v0), EnvSpec(FishingDerbyNoFrameskip-v3), EnvSpec(FishingDerby-ram-v0), EnvSpec(FishingDerby-ram-v3), EnvSpec(FishingDerby-ramDeterministic-v0), EnvSpec(FishingDerby-ramDeterministic-v3), EnvSpec(FishingDerby-ramNoFrameskip-v0), EnvSpec(FishingDerby-ramNoFrameskip-v3), EnvSpec(Freeway-v0), EnvSpec(Freeway-v3), EnvSpec(FreewayDeterministic-v0), EnvSpec(FreewayDeterministic-v3), EnvSpec(FreewayNoFrameskip-v0), EnvSpec(FreewayNoFrameskip-v3), EnvSpec(Freeway-ram-v0), EnvSpec(Freeway-ram-v3), EnvSpec(Freeway-ramDeterministic-v0), EnvSpec(Freeway-ramDeterministic-v3), EnvSpec(Freeway-ramNoFrameskip-v0), EnvSpec(Freeway-ramNoFrameskip-v3), EnvSpec(Frostbite-v0), EnvSpec(Frostbite-v3), EnvSpec(FrostbiteDeterministic-v0), EnvSpec(FrostbiteDeterministic-v3), EnvSpec(FrostbiteNoFrameskip-v0), EnvSpec(FrostbiteNoFrameskip-v3), EnvSpec(Frostbite-ram-v0), EnvSpec(Frostbite-ram-v3), EnvSpec(Frostbite-ramDeterministic-v0), EnvSpec(Frostbite-ramDeterministic-v3), EnvSpec(Frostbite-ramNoFrameskip-v0), EnvSpec(Frostbite-ramNoFrameskip-v3), EnvSpec(Gopher-v0), EnvSpec(Gopher-v3), EnvSpec(GopherDeterministic-v0), EnvSpec(GopherDeterministic-v3), EnvSpec(GopherNoFrameskip-v0), EnvSpec(GopherNoFrameskip-v3), EnvSpec(Gopher-ram-v0), EnvSpec(Gopher-ram-v3), EnvSpec(Gopher-ramDeterministic-v0), EnvSpec(Gopher-ramDeterministic-v3), EnvSpec(Gopher-ramNoFrameskip-v0), EnvSpec(Gopher-ramNoFrameskip-v3), EnvSpec(Gravitar-v0), EnvSpec(Gravitar-v3), EnvSpec(GravitarDeterministic-v0), EnvSpec(GravitarDeterministic-v3), EnvSpec(GravitarNoFrameskip-v0), EnvSpec(GravitarNoFrameskip-v3), EnvSpec(Gravitar-ram-v0), EnvSpec(Gravitar-ram-v3), EnvSpec(Gravitar-ramDeterministic-v0), EnvSpec(Gravitar-ramDeterministic-v3), EnvSpec(Gravitar-ramNoFrameskip-v0), EnvSpec(Gravitar-ramNoFrameskip-v3), EnvSpec(IceHockey-v0), EnvSpec(IceHockey-v3), EnvSpec(IceHockeyDeterministic-v0), EnvSpec(IceHockeyDeterministic-v3), EnvSpec(IceHockeyNoFrameskip-v0), EnvSpec(IceHockeyNoFrameskip-v3), EnvSpec(IceHockey-ram-v0), EnvSpec(IceHockey-ram-v3), EnvSpec(IceHockey-ramDeterministic-v0), EnvSpec(IceHockey-ramDeterministic-v3), EnvSpec(IceHockey-ramNoFrameskip-v0), EnvSpec(IceHockey-ramNoFrameskip-v3), EnvSpec(Jamesbond-v0), EnvSpec(Jamesbond-v3), EnvSpec(JamesbondDeterministic-v0), EnvSpec(JamesbondDeterministic-v3), EnvSpec(JamesbondNoFrameskip-v0), EnvSpec(JamesbondNoFrameskip-v3), EnvSpec(Jamesbond-ram-v0), EnvSpec(Jamesbond-ram-v3), EnvSpec(Jamesbond-ramDeterministic-v0), EnvSpec(Jamesbond-ramDeterministic-v3), EnvSpec(Jamesbond-ramNoFrameskip-v0), EnvSpec(Jamesbond-ramNoFrameskip-v3), EnvSpec(JourneyEscape-v0), EnvSpec(JourneyEscape-v3), EnvSpec(JourneyEscapeDeterministic-v0), EnvSpec(JourneyEscapeDeterministic-v3), EnvSpec(JourneyEscapeNoFrameskip-v0), EnvSpec(JourneyEscapeNoFrameskip-v3), EnvSpec(JourneyEscape-ram-v0), EnvSpec(JourneyEscape-ram-v3), EnvSpec(JourneyEscape-ramDeterministic-v0), EnvSpec(JourneyEscape-ramDeterministic-v3), EnvSpec(JourneyEscape-ramNoFrameskip-v0), EnvSpec(JourneyEscape-ramNoFrameskip-v3), EnvSpec(Kangaroo-v0), EnvSpec(Kangaroo-v3), EnvSpec(KangarooDeterministic-v0), EnvSpec(KangarooDeterministic-v3), EnvSpec(KangarooNoFrameskip-v0), EnvSpec(KangarooNoFrameskip-v3), 
EnvSpec(Kangaroo-ram-v0), EnvSpec(Kangaroo-ram-v3), EnvSpec(Kangaroo-ramDeterministic-v0), EnvSpec(Kangaroo-ramDeterministic-v3), EnvSpec(Kangaroo-ramNoFrameskip-v0), EnvSpec(Kangaroo-ramNoFrameskip-v3), EnvSpec(Krull-v0), EnvSpec(Krull-v3), EnvSpec(KrullDeterministic-v0), EnvSpec(KrullDeterministic-v3), EnvSpec(KrullNoFrameskip-v0), EnvSpec(KrullNoFrameskip-v3), EnvSpec(Krull-ram-v0), EnvSpec(Krull-ram-v3), EnvSpec(Krull-ramDeterministic-v0), EnvSpec(Krull-ramDeterministic-v3), EnvSpec(Krull-ramNoFrameskip-v0), EnvSpec(Krull-ramNoFrameskip-v3), EnvSpec(KungFuMaster-v0), EnvSpec(KungFuMaster-v3), EnvSpec(KungFuMasterDeterministic-v0), EnvSpec(KungFuMasterDeterministic-v3), EnvSpec(KungFuMasterNoFrameskip-v0), EnvSpec(KungFuMasterNoFrameskip-v3), EnvSpec(KungFuMaster-ram-v0), EnvSpec(KungFuMaster-ram-v3), EnvSpec(KungFuMaster-ramDeterministic-v0), EnvSpec(KungFuMaster-ramDeterministic-v3), EnvSpec(KungFuMaster-ramNoFrameskip-v0), EnvSpec(KungFuMaster-ramNoFrameskip-v3), EnvSpec(MontezumaRevenge-v0), EnvSpec(MontezumaRevenge-v3), EnvSpec(MontezumaRevengeDeterministic-v0), EnvSpec(MontezumaRevengeDeterministic-v3), EnvSpec(MontezumaRevengeNoFrameskip-v0), EnvSpec(MontezumaRevengeNoFrameskip-v3), EnvSpec(MontezumaRevenge-ram-v0), EnvSpec(MontezumaRevenge-ram-v3), EnvSpec(MontezumaRevenge-ramDeterministic-v0), EnvSpec(MontezumaRevenge-ramDeterministic-v3), EnvSpec(MontezumaRevenge-ramNoFrameskip-v0), EnvSpec(MontezumaRevenge-ramNoFrameskip-v3), EnvSpec(MsPacman-v0), EnvSpec(MsPacman-v3), EnvSpec(MsPacmanDeterministic-v0), EnvSpec(MsPacmanDeterministic-v3), EnvSpec(MsPacmanNoFrameskip-v0), EnvSpec(MsPacmanNoFrameskip-v3), EnvSpec(MsPacman-ram-v0), EnvSpec(MsPacman-ram-v3), EnvSpec(MsPacman-ramDeterministic-v0), EnvSpec(MsPacman-ramDeterministic-v3), EnvSpec(MsPacman-ramNoFrameskip-v0), EnvSpec(MsPacman-ramNoFrameskip-v3), EnvSpec(NameThisGame-v0), EnvSpec(NameThisGame-v3), EnvSpec(NameThisGameDeterministic-v0), EnvSpec(NameThisGameDeterministic-v3), EnvSpec(NameThisGameNoFrameskip-v0), EnvSpec(NameThisGameNoFrameskip-v3), EnvSpec(NameThisGame-ram-v0), EnvSpec(NameThisGame-ram-v3), EnvSpec(NameThisGame-ramDeterministic-v0), EnvSpec(NameThisGame-ramDeterministic-v3), EnvSpec(NameThisGame-ramNoFrameskip-v0), EnvSpec(NameThisGame-ramNoFrameskip-v3), EnvSpec(Phoenix-v0), EnvSpec(Phoenix-v3), EnvSpec(PhoenixDeterministic-v0), EnvSpec(PhoenixDeterministic-v3), EnvSpec(PhoenixNoFrameskip-v0), EnvSpec(PhoenixNoFrameskip-v3), EnvSpec(Phoenix-ram-v0), EnvSpec(Phoenix-ram-v3), EnvSpec(Phoenix-ramDeterministic-v0), EnvSpec(Phoenix-ramDeterministic-v3), EnvSpec(Phoenix-ramNoFrameskip-v0), EnvSpec(Phoenix-ramNoFrameskip-v3), EnvSpec(Pitfall-v0), EnvSpec(Pitfall-v3), EnvSpec(PitfallDeterministic-v0), EnvSpec(PitfallDeterministic-v3), EnvSpec(PitfallNoFrameskip-v0), EnvSpec(PitfallNoFrameskip-v3), EnvSpec(Pitfall-ram-v0), EnvSpec(Pitfall-ram-v3), EnvSpec(Pitfall-ramDeterministic-v0), EnvSpec(Pitfall-ramDeterministic-v3), EnvSpec(Pitfall-ramNoFrameskip-v0), EnvSpec(Pitfall-ramNoFrameskip-v3), EnvSpec(Pong-v0), EnvSpec(Pong-v3), EnvSpec(PongDeterministic-v0), EnvSpec(PongDeterministic-v3), EnvSpec(PongNoFrameskip-v0), EnvSpec(PongNoFrameskip-v3), EnvSpec(Pong-ram-v0), EnvSpec(Pong-ram-v3), EnvSpec(Pong-ramDeterministic-v0), EnvSpec(Pong-ramDeterministic-v3), EnvSpec(Pong-ramNoFrameskip-v0), EnvSpec(Pong-ramNoFrameskip-v3), EnvSpec(Pooyan-v0), EnvSpec(Pooyan-v3), EnvSpec(PooyanDeterministic-v0), EnvSpec(PooyanDeterministic-v3), EnvSpec(PooyanNoFrameskip-v0), EnvSpec(PooyanNoFrameskip-v3), EnvSpec(Pooyan-ram-v0), 
EnvSpec(Pooyan-ram-v3), EnvSpec(Pooyan-ramDeterministic-v0), EnvSpec(Pooyan-ramDeterministic-v3), EnvSpec(Pooyan-ramNoFrameskip-v0), EnvSpec(Pooyan-ramNoFrameskip-v3), EnvSpec(PrivateEye-v0), EnvSpec(PrivateEye-v3), EnvSpec(PrivateEyeDeterministic-v0), EnvSpec(PrivateEyeDeterministic-v3), EnvSpec(PrivateEyeNoFrameskip-v0), EnvSpec(PrivateEyeNoFrameskip-v3), EnvSpec(PrivateEye-ram-v0), EnvSpec(PrivateEye-ram-v3), EnvSpec(PrivateEye-ramDeterministic-v0), EnvSpec(PrivateEye-ramDeterministic-v3), EnvSpec(PrivateEye-ramNoFrameskip-v0), EnvSpec(PrivateEye-ramNoFrameskip-v3), EnvSpec(Qbert-v0), EnvSpec(Qbert-v3), EnvSpec(QbertDeterministic-v0), EnvSpec(QbertDeterministic-v3), EnvSpec(QbertNoFrameskip-v0), EnvSpec(QbertNoFrameskip-v3), EnvSpec(Qbert-ram-v0), EnvSpec(Qbert-ram-v3), EnvSpec(Qbert-ramDeterministic-v0), EnvSpec(Qbert-ramDeterministic-v3), EnvSpec(Qbert-ramNoFrameskip-v0), EnvSpec(Qbert-ramNoFrameskip-v3), EnvSpec(Riverraid-v0), EnvSpec(Riverraid-v3), EnvSpec(RiverraidDeterministic-v0), EnvSpec(RiverraidDeterministic-v3), EnvSpec(RiverraidNoFrameskip-v0), EnvSpec(RiverraidNoFrameskip-v3), EnvSpec(Riverraid-ram-v0), EnvSpec(Riverraid-ram-v3), EnvSpec(Riverraid-ramDeterministic-v0), EnvSpec(Riverraid-ramDeterministic-v3), EnvSpec(Riverraid-ramNoFrameskip-v0), EnvSpec(Riverraid-ramNoFrameskip-v3), EnvSpec(RoadRunner-v0), EnvSpec(RoadRunner-v3), EnvSpec(RoadRunnerDeterministic-v0), EnvSpec(RoadRunnerDeterministic-v3), EnvSpec(RoadRunnerNoFrameskip-v0), EnvSpec(RoadRunnerNoFrameskip-v3), EnvSpec(RoadRunner-ram-v0), EnvSpec(RoadRunner-ram-v3), EnvSpec(RoadRunner-ramDeterministic-v0), EnvSpec(RoadRunner-ramDeterministic-v3), EnvSpec(RoadRunner-ramNoFrameskip-v0), EnvSpec(RoadRunner-ramNoFrameskip-v3), EnvSpec(Robotank-v0), EnvSpec(Robotank-v3), EnvSpec(RobotankDeterministic-v0), EnvSpec(RobotankDeterministic-v3), EnvSpec(RobotankNoFrameskip-v0), EnvSpec(RobotankNoFrameskip-v3), EnvSpec(Robotank-ram-v0), EnvSpec(Robotank-ram-v3), EnvSpec(Robotank-ramDeterministic-v0), EnvSpec(Robotank-ramDeterministic-v3), EnvSpec(Robotank-ramNoFrameskip-v0), EnvSpec(Robotank-ramNoFrameskip-v3), EnvSpec(Seaquest-v0), EnvSpec(Seaquest-v3), EnvSpec(SeaquestDeterministic-v0), EnvSpec(SeaquestDeterministic-v3), EnvSpec(SeaquestNoFrameskip-v0), EnvSpec(SeaquestNoFrameskip-v3), EnvSpec(Seaquest-ram-v0), EnvSpec(Seaquest-ram-v3), EnvSpec(Seaquest-ramDeterministic-v0), EnvSpec(Seaquest-ramDeterministic-v3), EnvSpec(Seaquest-ramNoFrameskip-v0), EnvSpec(Seaquest-ramNoFrameskip-v3), EnvSpec(Skiing-v0), EnvSpec(Skiing-v3), EnvSpec(SkiingDeterministic-v0), EnvSpec(SkiingDeterministic-v3), EnvSpec(SkiingNoFrameskip-v0), EnvSpec(SkiingNoFrameskip-v3), EnvSpec(Skiing-ram-v0), EnvSpec(Skiing-ram-v3), EnvSpec(Skiing-ramDeterministic-v0), EnvSpec(Skiing-ramDeterministic-v3), EnvSpec(Skiing-ramNoFrameskip-v0), EnvSpec(Skiing-ramNoFrameskip-v3), EnvSpec(Solaris-v0), EnvSpec(Solaris-v3), EnvSpec(SolarisDeterministic-v0), EnvSpec(SolarisDeterministic-v3), EnvSpec(SolarisNoFrameskip-v0), EnvSpec(SolarisNoFrameskip-v3), EnvSpec(Solaris-ram-v0), EnvSpec(Solaris-ram-v3), EnvSpec(Solaris-ramDeterministic-v0), EnvSpec(Solaris-ramDeterministic-v3), EnvSpec(Solaris-ramNoFrameskip-v0), EnvSpec(Solaris-ramNoFrameskip-v3), EnvSpec(SpaceInvaders-v0), EnvSpec(SpaceInvaders-v3), EnvSpec(SpaceInvadersDeterministic-v0), EnvSpec(SpaceInvadersDeterministic-v3), EnvSpec(SpaceInvadersNoFrameskip-v0), EnvSpec(SpaceInvadersNoFrameskip-v3), EnvSpec(SpaceInvaders-ram-v0), EnvSpec(SpaceInvaders-ram-v3), EnvSpec(SpaceInvaders-ramDeterministic-v0), 
EnvSpec(SpaceInvaders-ramDeterministic-v3), EnvSpec(SpaceInvaders-ramNoFrameskip-v0), EnvSpec(SpaceInvaders-ramNoFrameskip-v3), EnvSpec(StarGunner-v0), EnvSpec(StarGunner-v3), EnvSpec(StarGunnerDeterministic-v0), EnvSpec(StarGunnerDeterministic-v3), EnvSpec(StarGunnerNoFrameskip-v0), EnvSpec(StarGunnerNoFrameskip-v3), EnvSpec(StarGunner-ram-v0), EnvSpec(StarGunner-ram-v3), EnvSpec(StarGunner-ramDeterministic-v0), EnvSpec(StarGunner-ramDeterministic-v3), EnvSpec(StarGunner-ramNoFrameskip-v0), EnvSpec(StarGunner-ramNoFrameskip-v3), EnvSpec(Tennis-v0), EnvSpec(Tennis-v3), EnvSpec(TennisDeterministic-v0), EnvSpec(TennisDeterministic-v3), EnvSpec(TennisNoFrameskip-v0), EnvSpec(TennisNoFrameskip-v3), EnvSpec(Tennis-ram-v0), EnvSpec(Tennis-ram-v3), EnvSpec(Tennis-ramDeterministic-v0), EnvSpec(Tennis-ramDeterministic-v3), EnvSpec(Tennis-ramNoFrameskip-v0), EnvSpec(Tennis-ramNoFrameskip-v3), EnvSpec(TimePilot-v0), EnvSpec(TimePilot-v3), EnvSpec(TimePilotDeterministic-v0), EnvSpec(TimePilotDeterministic-v3), EnvSpec(TimePilotNoFrameskip-v0), EnvSpec(TimePilotNoFrameskip-v3), EnvSpec(TimePilot-ram-v0), EnvSpec(TimePilot-ram-v3), EnvSpec(TimePilot-ramDeterministic-v0), EnvSpec(TimePilot-ramDeterministic-v3), EnvSpec(TimePilot-ramNoFrameskip-v0), EnvSpec(TimePilot-ramNoFrameskip-v3), EnvSpec(Tutankham-v0), EnvSpec(Tutankham-v3), EnvSpec(TutankhamDeterministic-v0), EnvSpec(TutankhamDeterministic-v3), EnvSpec(TutankhamNoFrameskip-v0), EnvSpec(TutankhamNoFrameskip-v3), EnvSpec(Tutankham-ram-v0), EnvSpec(Tutankham-ram-v3), EnvSpec(Tutankham-ramDeterministic-v0), EnvSpec(Tutankham-ramDeterministic-v3), EnvSpec(Tutankham-ramNoFrameskip-v0), EnvSpec(Tutankham-ramNoFrameskip-v3), EnvSpec(UpNDown-v0), EnvSpec(UpNDown-v3), EnvSpec(UpNDownDeterministic-v0), EnvSpec(UpNDownDeterministic-v3), EnvSpec(UpNDownNoFrameskip-v0), EnvSpec(UpNDownNoFrameskip-v3), EnvSpec(UpNDown-ram-v0), EnvSpec(UpNDown-ram-v3), EnvSpec(UpNDown-ramDeterministic-v0), EnvSpec(UpNDown-ramDeterministic-v3), EnvSpec(UpNDown-ramNoFrameskip-v0), EnvSpec(UpNDown-ramNoFrameskip-v3), EnvSpec(Venture-v0), EnvSpec(Venture-v3), EnvSpec(VentureDeterministic-v0), EnvSpec(VentureDeterministic-v3), EnvSpec(VentureNoFrameskip-v0), EnvSpec(VentureNoFrameskip-v3), EnvSpec(Venture-ram-v0), EnvSpec(Venture-ram-v3), EnvSpec(Venture-ramDeterministic-v0), EnvSpec(Venture-ramDeterministic-v3), EnvSpec(Venture-ramNoFrameskip-v0), EnvSpec(Venture-ramNoFrameskip-v3), EnvSpec(VideoPinball-v0), EnvSpec(VideoPinball-v3), EnvSpec(VideoPinballDeterministic-v0), EnvSpec(VideoPinballDeterministic-v3), EnvSpec(VideoPinballNoFrameskip-v0), EnvSpec(VideoPinballNoFrameskip-v3), EnvSpec(VideoPinball-ram-v0), EnvSpec(VideoPinball-ram-v3), EnvSpec(VideoPinball-ramDeterministic-v0), EnvSpec(VideoPinball-ramDeterministic-v3), EnvSpec(VideoPinball-ramNoFrameskip-v0), EnvSpec(VideoPinball-ramNoFrameskip-v3), EnvSpec(WizardOfWor-v0), EnvSpec(WizardOfWor-v3), EnvSpec(WizardOfWorDeterministic-v0), EnvSpec(WizardOfWorDeterministic-v3), EnvSpec(WizardOfWorNoFrameskip-v0), EnvSpec(WizardOfWorNoFrameskip-v3), EnvSpec(WizardOfWor-ram-v0), EnvSpec(WizardOfWor-ram-v3), EnvSpec(WizardOfWor-ramDeterministic-v0), EnvSpec(WizardOfWor-ramDeterministic-v3), EnvSpec(WizardOfWor-ramNoFrameskip-v0), EnvSpec(WizardOfWor-ramNoFrameskip-v3), EnvSpec(YarsRevenge-v0), EnvSpec(YarsRevenge-v3), EnvSpec(YarsRevengeDeterministic-v0), EnvSpec(YarsRevengeDeterministic-v3), EnvSpec(YarsRevengeNoFrameskip-v0), EnvSpec(YarsRevengeNoFrameskip-v3), EnvSpec(YarsRevenge-ram-v0), EnvSpec(YarsRevenge-ram-v3), 
EnvSpec(YarsRevenge-ramDeterministic-v0), EnvSpec(YarsRevenge-ramDeterministic-v3), EnvSpec(YarsRevenge-ramNoFrameskip-v0), EnvSpec(YarsRevenge-ramNoFrameskip-v3), EnvSpec(Zaxxon-v0), EnvSpec(Zaxxon-v3), EnvSpec(ZaxxonDeterministic-v0), EnvSpec(ZaxxonDeterministic-v3), EnvSpec(ZaxxonNoFrameskip-v0), EnvSpec(ZaxxonNoFrameskip-v3), EnvSpec(Zaxxon-ram-v0), EnvSpec(Zaxxon-ram-v3), EnvSpec(Zaxxon-ramDeterministic-v0), EnvSpec(Zaxxon-ramDeterministic-v3), EnvSpec(Zaxxon-ramNoFrameskip-v0), EnvSpec(Zaxxon-ramNoFrameskip-v3), EnvSpec(Go9x9-v0), EnvSpec(Go19x19-v0), EnvSpec(Hex9x9-v0), EnvSpec(OneRoundDeterministicReward-v0), EnvSpec(TwoRoundDeterministicReward-v0), EnvSpec(OneRoundNondeterministicReward-v0), EnvSpec(TwoRoundNondeterministicReward-v0), EnvSpec(ConvergenceControl-v0), EnvSpec(CNNClassifierTraining-v0), EnvSpec(PredictActionsCartpole-v0), EnvSpec(PredictObsCartpole-v0), EnvSpec(SemisuperPendulumNoise-v0), EnvSpec(SemisuperPendulumRandom-v0), EnvSpec(SemisuperPendulumDecay-v0), EnvSpec(OffSwitchCartpole-v0), EnvSpec(OffSwitchCartpoleProb-v0)]) ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output [2018-07-21 16:17:10,748] Making new env: CliffWalking-v0 ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/yingweiy/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /home/yingweiy/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warnings.warn(message, mplDeprecation, stacklevel=1) /home/yingweiy/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /home/yingweiy/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) V_sarsa ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
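###Markdown As a quick reference before running it: the `update_Q_sarsamax` function above implements the sarsamax (Q-learning) update, which bootstraps from the greedy action value in the next state,$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \max_{a \in \mathcal{A}} Q(S_{t+1}, a) - Q(S_t, A_t) \Big),$$where the bootstrap term drops to $0$ on the final step of an episode (the terminal state's action values are never updated and stay at zero).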
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), 
np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q_sarsa(Q, state, action, reward, alpha, gamma, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" cur_est = Q[state][action] # select Q estimation value for current state-action Q_next = Q[next_state][next_action] if next_state is not None else 0 # Q value on the next state-action alt_est = reward + gamma * Q_next return cur_est + alpha * (alt_est - cur_est) def egre_policy(Q_s, nA, epsilon): """ return action according to e greedy policy""" if random.random() > epsilon: return np.argmax(Q_s) else: return random.choice(np.arange(nA)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100, eps_start=1.0): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initalize epsilon # epsilon = eps_start # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # init score to accumulate reward state = env.reset() epsilon = eps_start / i_episode # epsilon = max(epsilon*eps_decay, eps_min) # such method doesn't work # select action by egreedy policy action = egre_policy(Q[state], nA, epsilon) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = egre_policy(Q[next_state], nA, epsilon) Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, \ gamma, next_state=next_state, next_action=next_action) action = next_action state = next_state if done: Q[state][action] = update_Q_sarsa(Q, state, action, reward, alpha, gamma) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ 
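###Markdown For reference, the `update_Q_sarsa` function above applies the on-policy Sarsa update, bootstrapping from the action that will actually be taken next,$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \Big),$$with $Q(S_{t+1}, A_{t+1})$ replaced by $0$ on the final step of each episode.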
###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 2500, .025) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 2500/2500 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(Q, state, action, reward, alpha, gamma, next_state=None): """Returns updated Q-value for the most recent experience.""" cur_est = Q[state][action] # select Q estimation value for current state-action Q_next = np.max(Q[next_state]) if next_state is not None else 0 # select max Q value of the next state alt_est = reward + gamma * Q_next return cur_est + alpha * (alt_est - cur_est) def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # monitor perfomance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # print("episode {}".format(i_episode)) # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # init score for accumulating reward epsilon = 1 / i_episode state = env.reset() while True: action = egre_policy(Q[state], nA, epsilon) # action = epsilon_greedy(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) score += reward # Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ # state, action, reward, next_state) Q[state][action] = update_Q_sarsamax(Q, state, action, reward, alpha, \ gamma, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance 
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 300, .99) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 300/300 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(Q, state, action, reward, alpha, gamma, eps, nA, next_state=None): """Returns updated Q-value for the most recent experience.""" cur_est = Q[state][action] # select Q estimation value for current state-action policy_next = np.ones(nA) * eps / nA # set probability of all actions to epsilon / nA # update prob of action with max Q to 1 - esp + esp / nA policy_next[np.argmax(Q[next_state])] = 1 - eps + eps / nA Q_next = np.dot(Q[next_state], policy_next) alt_est = reward + gamma * Q_next return cur_est + alpha * (alt_est - cur_est) def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # monitor perfomance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # print("episode {}".format(i_episode)) # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # init score for accumulating reward # eps = 1 / i_episode eps = 0.007 state = env.reset() while True: action = egre_policy(Q[state], nA, eps) # action = epsilon_greedy(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) score += reward # Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ # state, action, reward, next_state) Q[state][action] = update_Q_expsarsa(Q, state, action, reward, alpha, \ gamma, eps, nA, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
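###Markdown Before running it, note that the `update_Q_expsarsa` function above replaces the sampled next action value used by Sarsa with an expectation over the $\epsilon$-greedy policy for the next state,$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \Big),$$where $\pi(a \mid S_{t+1})$ assigns probability $1 - \epsilon + \epsilon/|\mathcal{A}|$ to the greedy action and $\epsilon/|\mathcal{A}|$ to every other action, exactly as constructed in `policy_next`.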
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 2000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 2000/2000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Users/rampi/anaconda/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Users/rampi/anaconda/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. 
warnings.warn(message, mplDeprecation, stacklevel=1) /Users/rampi/anaconda/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Users/rampi/anaconda/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q,state,nA,eps): #With prob eps explore, else exploit best_action = np.argmax(Q[state]) if np.random.rand() < eps: #explore return np.random.choice(range(nA)) else: #exploit return best_action def update_Q_sarsa(alpha,gamma,Q,state,action,reward,next_state=None,next_action=None): Q_nsa_value = 0.0 if next_state is not None: Q_nsa_value = Q[next_state][next_action] curr_value = Q[state][action] new_value = curr_value + alpha*(reward + gamma*Q_nsa_value - curr_value) return new_value def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes)) sys.stdout.flush() #Author: @Ram Prakash score = 0 state = env.reset() eps = 1.0/i_episode action = epsilon_greedy(Q,state,env.nA,eps) while True: next_state,reward,done,info = env.step(action) score = score+reward if not done: next_action = epsilon_greedy(Q,next_state,env.nA,eps) Q[state][action] = update_Q_sarsa(alpha,gamma,Q,state,action,reward,next_state,next_action) state = next_state action = next_action else: Q[state][action] = update_Q_sarsa(alpha,gamma,Q,state,action,reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
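###Markdown For reference, the `epsilon_greedy` helper defined above draws a uniformly random action with probability $\epsilon$ and the greedy action otherwise, so the induced action probabilities are$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \dfrac{\epsilon}{|\mathcal{A}|} & \text{if } a = \arg\max_{a'} Q(s, a') \\ \dfrac{\epsilon}{|\mathcal{A}|} & \text{otherwise,} \end{cases}$$and with the $\epsilon = 1/i$ schedule used in `sarsa` the policy becomes greedy in the limit while still exploring in early episodes.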
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 100/5000 Episode 200/5000 Episode 300/5000 Episode 400/5000 Episode 500/5000 Episode 600/5000 Episode 700/5000 Episode 800/5000 Episode 900/5000 Episode 1000/5000 Episode 1100/5000 Episode 1200/5000 Episode 1300/5000 Episode 1400/5000 Episode 1500/5000 Episode 1600/5000 Episode 1700/5000 Episode 1800/5000 Episode 1900/5000 Episode 2000/5000 Episode 2100/5000 Episode 2200/5000 Episode 2300/5000 Episode 2400/5000 Episode 2500/5000 Episode 2600/5000 Episode 2700/5000 Episode 2800/5000 Episode 2900/5000 Episode 3000/5000 Episode 3100/5000 Episode 3200/5000 Episode 3300/5000 Episode 3400/5000 Episode 3500/5000 Episode 3600/5000 Episode 3700/5000 Episode 3800/5000 Episode 3900/5000 Episode 4000/5000 Episode 4100/5000 Episode 4200/5000 Episode 4300/5000 Episode 4400/5000 Episode 4500/5000 Episode 4600/5000 Episode 4700/5000 Episode 4800/5000 Episode 4900/5000 Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
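###Markdown The `q_learning` cell above still has its episode loop marked `TODO`, so running it as-is will not produce a useful policy. A minimal sketch of one way to complete it — not the reference solution — reusing the `epsilon_greedy` helper from Part 1 and assuming an $\epsilon = 1/i$ decay schedule, is:
```python
import sys
import numpy as np
from collections import defaultdict

def q_learning(env, num_episodes, alpha, gamma=1.0):
    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(env.nA))
    for i_episode in range(1, num_episodes + 1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        eps = 1.0 / i_episode                      # assumed exploration schedule
        state = env.reset()
        while True:
            # epsilon_greedy is the helper defined in Part 1 of this notebook
            action = epsilon_greedy(Q, state, env.nA, eps)
            next_state, reward, done, info = env.step(action)
            # sarsamax update: bootstrap from the greedy value of the next state
            Qsa_next = 0.0 if done else np.max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * Qsa_next - Q[state][action])
            state = next_state
            if done:
                break
    return Q
```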
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import random import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_epsilon_policy(bj_env, epsilon, Q, state, nA): policy = np.ones(nA) * epsilon / nA astar = np.argmax(Q[state]) policy[astar] = 1 - epsilon + (epsilon / nA) action = np.random.choice(nA, p=policy) if state in Q else bj_env.action_space.sample() return action def get_greedy_policy(bj_env, Q, state): return np.argmax(Q[state]) def get_expected_value(bj_env, epsilon, Q, state, nA): policy = np.ones(nA) * epsilon / nA astar = np.argmax(Q[state]) policy[astar] = 1 - epsilon + (epsilon / nA) V = np.dot(Q[state],policy) return V def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() epsilon = 1 / i_episode action = get_epsilon_policy(env, epsilon, Q, state, nA) while True: next_state, reward, done, info = env.step(action) next_action = get_epsilon_policy(env, epsilon, Q, next_state, nA) if done: Q[state][action] += alpha * (reward - Q[state][action]) break else: Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action] - Q[state][action]) state = next_state action = next_action return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) nA = env.nA # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() epsilon = 1 / i_episode while True: action = get_epsilon_policy(env, epsilon, Q, state, nA) next_state, reward, done, info = env.step(action) if done: Q[state][action] += alpha * (reward - Q[state][action]) break else: greedy_action = get_greedy_policy(env, Q, next_state) Q[state][action] += alpha * (reward + gamma * Q[next_state][greedy_action] - Q[state][action]) state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() state = env.reset() epsilon = 0.005 while True: action = get_epsilon_policy(env, epsilon, Q, state, env.nA) next_state, reward, done, info = env.step(action) if done: Q[state][action] += alpha * (reward - Q[state][action]) break else: expected_value = get_expected_value(env, epsilon, Q, next_state, env.nA) Q[state][action] += alpha * (reward + gamma * expected_value - Q[state][action]) state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') env.step(2) ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
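###Markdown The `sarsa` cell above leaves the episode loop as a `TODO`. A minimal sketch of one possible completion — not the reference solution — using an inline $\epsilon$-greedy action selector introduced just for this sketch and an assumed $\epsilon = 1/i$ schedule, is:
```python
import sys
import numpy as np
from collections import defaultdict

def sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))

    def select_action(state, eps):
        # epsilon-greedy selection (hypothetical helper, local to this sketch)
        if np.random.rand() < eps:
            return np.random.choice(env.nA)
        return np.argmax(Q[state])

    for i_episode in range(1, num_episodes + 1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        eps = 1.0 / i_episode                      # assumed exploration schedule
        state = env.reset()
        action = select_action(state, eps)
        while True:
            next_state, reward, done, info = env.step(action)
            if done:
                # terminal step: no bootstrap term
                Q[state][action] += alpha * (reward - Q[state][action])
                break
            next_action = select_action(next_state, eps)
            # on-policy Sarsa update using the action actually chosen next
            Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action]
                                         - Q[state][action])
            state, action = next_state, next_action
    return Q
```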
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/webwerks/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/webwerks/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/webwerks/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " /home/webwerks/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_episode_from_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if next_state in Q else env.action_space.sample() Q[state][action] += alpha * (reward + (gamma * Q[next_state][next_action]) - Q[state][action]) state, action = next_state, next_action if done: break return Q def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon eps = 1.0 / i_episode Q = update_episode_from_Q(env, Q, eps, nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_episode_from_Q_q_learning(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs_q_learning(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) next_action = np.random.choice(np.arange(nA), p=get_probs_q_learning(Q[next_state], epsilon, nA)) \ if next_state in Q else env.action_space.sample() Q[state][action] += alpha * (reward + (gamma * np.max(Q[next_state])) - Q[state][action]) state, action = next_state, next_action if done: break return Q def get_probs_q_learning(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon eps = 1.0 / i_episode Q = update_episode_from_Q_q_learning(env, Q, eps, nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_episode_from_Q_expected_sarsa(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ state = env.reset() action = np.random.choice(np.arange(nA), p=get_probs_q_learning(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() while True: next_state, reward, done, info = env.step(action) policy = get_probs_q_learning(Q[next_state], epsilon, nA) next_action = np.random.choice(np.arange(nA), p= policy)\ if next_state in Q else env.action_space.sample() Q[state][action] += alpha * (reward + (gamma * np.dot(Q[next_state], policy)) - Q[state][action]) state, action = next_state, next_action if done: break return Q def get_probs_expected_sarsa(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def expected_sarsa(env, num_episodes, alpha, gamma=1.0): nA = env.nA # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes eps = 0.005 # set value of epsilon for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon Q = update_episode_from_Q_expected_sarsa(env, Q, eps, nA, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_action(Qs, nA, eps): probs = np.ones(nA).astype(np.float32) * eps / nA probs[np.argmax(Qs)] += 1 - eps return np.random.choice(np.arange(nA), p=probs) def get_greedy_action(Qs, nA, eps): return np.argmax(Qs) def cliff_episode(env, Q, eps): episode = [] s = env.reset() while True: action = get_action(Q[s], env.nA, eps) next_state, reward, done, info = env.step(action) episode.append([s, action, reward]) s = next_state if done: break return episode def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_min=0.3, eps_decay=0.999): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress # eps = max(eps_min, eps*eps_decay) eps = 1.0 / i_episode if i_episode % 100 == 0: print("\rEpisode {}/{}, eps {}".format(i_episode, num_episodes,eps), end="") sys.stdout.flush() state = env.reset() action = get_action(Q[state], env.nA, eps) while True: next_state, reward, done, info = env.step(action) next_action = get_action(Q[next_state], env.nA, eps) Q[state][action] = Q[state][action] + alpha*(reward + gamma*Q[next_state][next_action] - Q[state][action]) state = next_state action = next_action if done: break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000, eps 0.00020408163265306123 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_action(Qs, nA, eps): probs = np.ones(nA).astype(np.float32) * eps / nA probs[np.argmax(Qs)] += 1 - eps return np.random.choice(np.arange(nA), p=probs) def get_greedy_action(Qs, nA, eps): return np.argmax(Qs) def cliff_episode(env, Q, eps): episode = [] s = env.reset() while True: action = get_action(Q[s], env.nA, eps) next_state, reward, done, info = env.step(action) episode.append([s, action, reward]) s = next_state if done: break return episode def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_min=0.3, eps_decay=0.999): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA) * 1000) # initialize performance monitor # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress # eps = max(eps_min, eps*eps_decay) eps = 1.0 / i_episode if i_episode % 100 == 0: print("\rEpisode {}/{}, eps {}".format(i_episode, num_episodes,eps), end="") sys.stdout.flush() state = env.reset() while True: action = get_action(Q[state], env.nA, eps) next_state, reward, done, info = env.step(action) Q[state][action] = Q[state][action] + alpha*(reward + gamma*Q[next_state][get_greedy_action(Q[next_state], env.nA, eps)] - Q[state][action]) state = next_state if done: break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000, eps 0.00020408163265306123 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_action(Qs, nA, eps): probs = np.ones(nA).astype(np.float32) * eps / nA probs[np.argmax(Qs)] += 1 - eps return np.random.choice(np.arange(nA), p=probs) def get_greedy_action(Qs, nA, eps): return np.argmax(Qs) def cliff_episode(env, Q, eps): episode = [] s = env.reset() while True: action = get_action(Q[s], env.nA, eps) next_state, reward, done, info = env.step(action) episode.append([s, action, reward]) s = next_state if done: break return episode def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_min=0.3, eps_decay=0.999): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.ones(env.nA) * 1000) # initialize performance monitor # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): # monitor progress # eps = max(eps_min, eps*eps_decay) # eps = 1.0 / i_episode eps = 0.005 if i_episode % 100 == 0: print("\rEpisode {}/{}, eps {}".format(i_episode, num_episodes,eps), end="") sys.stdout.flush() state = env.reset() while True: action = get_action(Q[state], env.nA, eps) next_state, reward, done, info = env.step(action) probs = np.ones(env.nA) * eps/ env.nA probs[np.argmax(Q[next_state])] += 1 - eps expected_Q = np.sum(Q[next_state] * probs) Q[state][action] = Q[state][action] + alpha*(reward + gamma*expected_Q - Q[state][action]) state = next_state if done: break ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000, eps 0.005 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0] = -np.arange(3, 15)[::-1] V_opt[1] = -np.arange(3, 15)[::-1] + 1 V_opt[2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa_udacity(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q def get_probs(Q_s, eps, nA): probs = np.ones(nA)*eps/nA probs[np.argmax(Q_s)] = 1 - eps + eps/nA return probs def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = 0.9999, eps_end = 0.01, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) ## monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1/i_episode#max(eps_start*eps_decay**i_episode, eps_end) score = 0 state = env.reset() action = epsilon_greedy(Q, state, env.nA, eps) #action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], eps, env.nA)) \ #if state in Q else env.action_space.sample() while True: next_state, next_reward, done, info = env.step(action) score += next_reward if not done: next_action = epsilon_greedy(Q, next_state, env.nA, eps) #next_action = np.random.choice(np.arange(env.nA), p=get_probs(Q[next_state], eps, env.nA)) \ #if next_state in Q else env.action_space.sample() Q[state][action] = Q[state][action] + alpha*(next_reward + gamma*(Q[next_state][next_action] if next_state in Q else 0) - Q[state][action]) action = next_action state = next_state if done: Q[state][action] = 
Q[state][action] + alpha*(next_reward - Q[state][action]) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01, eps_decay = 0.9999, eps_start=1, eps_end=0.01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01, eps_decay = 0.999, eps_start=1, eps_end=0.1) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa_udacity(env, 5000, 0.01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Implementation details:- α - small alpha ~ 0.01 - 0.05- ϵ - [0.01-0.5); 1] and eps_decay = 0.999 (0.9999); but seems not enough for V be equal to V_opt at non-optimal states --- the less the better- num_episodes = 10000OR - α - small alpha ~ 0.01 <- 0.05- ϵ - 1/i- num_episodes = 5000 Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is 
the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = 0.9999, eps_end = 0.01, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) ## monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = 1/i_episode#max(eps_start*eps_decay**i_episode, eps_end) score = 0 state = env.reset() #action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], eps, env.nA)) \ #if state in Q else env.action_space.sample() while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, next_reward, done, info = env.step(action) score += next_reward if not done: #next_action = epsilon_greedy(Q, next_state, env.nA, eps) #next_action = np.random.choice(np.arange(env.nA), p=get_probs(Q[next_state], eps, env.nA)) \ #if next_state in Q else env.action_space.sample() Q[state][action] = Q[state][action] + alpha*(next_reward + gamma*(np.max(Q[next_state]) if next_state in Q else 0) - Q[state][action]) state = next_state if done: Q[state][action] = Q[state][action] + alpha*(next_reward - Q[state][action]) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
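To see at a glance how this differs from Sarsa, compare the two TD targets. Sarsa updates $Q(S_t, A_t)$ toward $R_{t+1} + \gamma Q(S_{t+1}, A_{t+1})$, where $A_{t+1}$ is the action actually selected by the $\epsilon$-greedy policy, while Q-learning (sarsamax) updates toward $R_{t+1} + \gamma \max_a Q(S_{t+1}, a)$. In the code above this amounts to the single change from `Q[next_state][next_action]` to `np.max(Q[next_state])`; because the target no longer depends on the behaviour policy's next action, Q-learning is an off-policy method.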
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Implementation details:- α - small alpha ~ 0.01 - 0.05- ϵ - 1/i is good, but seems not enough for V be equal to V_opt at non-optimal states Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa_udacity(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % 
plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start = 1.0, eps_decay = 0.9999, eps_end = 0.01, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) ## monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function eps = max(eps_start*eps_decay**i_episode, eps_end) score = 0 state = env.reset() #action = np.random.choice(np.arange(env.nA), p=get_probs(Q[state], eps, env.nA)) \ #if state in Q else env.action_space.sample() while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, next_reward, done, info = env.step(action) score += next_reward if not done: #next_action = epsilon_greedy(Q, next_state, env.nA, eps) #next_action = np.random.choice(np.arange(env.nA), p=get_probs(Q[next_state], eps, env.nA)) \ #if next_state in Q else env.action_space.sample() probs = np.ones(env.nA)*eps/env.nA if next_state in Q: probs[np.argmax(Q[next_state])] = 1 - eps + eps/env.nA Q[state][action] = Q[state][action] + alpha*(next_reward + gamma*(np.sum(probs*Q[next_state])) - Q[state][action]) state = next_state if done: Q[state][action] = Q[state][action] + alpha*(next_reward - Q[state][action]) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
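For reference, the target being constructed here is the expectation of the Sarsa target under the current $\epsilon$-greedy policy: $R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a)$, which is what `np.dot(Q[next_state], policy_s)` (or the equivalent `np.sum(probs * Q[next_state])`) computes. As a concrete check with $\epsilon = 0.005$ and four actions, the greedy action receives probability $1 - 0.005 + 0.005/4 = 0.99625$ and each other action receives $0.005/4 = 0.00125$, so the probabilities sum to one and the expectation is dominated by the greedy value.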
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000,0.01, eps_start=0.005, eps_decay=1, eps_end=0.005) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa_udacity(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code #Unnecessary code def generate_episode_from_epsilon(env, epsilon, Q): episode = [] state = env.reset() nA = env.action_space.n epsilon_frac = epsilon / float(nA) while True: max_ind = np.argmax(Q[state]) probs = [] for it in range(nA): if it == max_ind: probs.append(1.0 - epsilon + epsilon_frac) else: probs.append(epsilon_frac) action = np.random.choice(np.arange(nA), p=probs) next_state, reward, done, info = env.step(action) episode.append((state, action, reward)) state = next_state if done: print(probs) print("Max ind {}".format(max_ind)) break return episode def get_action_epsilon_greedy(Q, state, epsilon): nA = env.action_space.n if np.random.random() > epsilon: return np.argmax(Q[state]) else: return np.random.choice(np.arange(nA)) def get_sarsa(Q, state, action, epsilon): next_state, reward, done, info = env.step(action) next_action = get_action_epsilon_greedy(Q, next_state, epsilon) return state, action, reward, next_state, next_action, done def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): #episode = [] state = env.reset() # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # TODO: complete the function epsilon = 1.0 / i_episode #get first action action = get_action_epsilon_greedy(Q, state, epsilon) while True: s, a, r, ns, na, done = get_sarsa(Q, state, action, epsilon) #episode.append((s,a,r,ns,na)) q_cur = Q[s][a] q_next = Q[ns][na] Q[s][a] = q_cur + alpha * (r + gamma*q_next - q_cur) state = ns action = na if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
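###Markdown In the implementation above, `get_sarsa` returns one $(S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1})$ transition and `sarsa` then applies the Sarsa update$$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\big(R_{t+1} + \gamma\,Q(S_{t+1},A_{t+1}) - Q(S_t,A_t)\big).$$The schedule $\epsilon = 1/i$ on episode $i$ makes the $\epsilon$-greedy policy greedy in the limit while still exploring every action infinitely often (the GLIE conditions), which, together with suitable step sizes, is what allows Sarsa to converge to the optimal action-value function.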
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_sarsamax(Q, state, action, epsilon): next_state, reward, done, info = env.step(action) next_action = np.argmax(Q[next_state]) return state, action, reward, next_state, next_action, done def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): #episode = [] state = env.reset() # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # TODO: complete the function epsilon = 1.0 / i_episode #get first action action = get_action_epsilon_greedy(Q, state, epsilon) while True: s, a, r, ns, na, done = get_sarsamax(Q, state, action, epsilon) #episode.append((s,a,r,ns,na)) q_cur = Q[s][a] q_next = Q[ns][na] Q[s][a] = q_cur + alpha * (r + gamma*q_next - q_cur) state = ns action = get_action_epsilon_greedy(Q, state, epsilon) if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
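###Markdown The only change relative to Sarsa is the target: `get_sarsamax` returns the greedy action for the next state, so the value that is bootstrapped is $\max_a Q(S_{t+1},a)$ and the update becomes the Q-learning (Sarsamax) rule$$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\big(R_{t+1} + \gamma\max_{a}Q(S_{t+1},a) - Q(S_t,A_t)\big).$$Because the target always uses the greedy action while the behaviour policy stays $\epsilon$-greedy, Q-learning is an off-policy method.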
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_sars(Q, state, action, epsilon): next_state, reward, done, info = env.step(action) return state, action, reward, next_state, done def get_probs(Q, state, epsilon): nA = len(Q[state]) probs = np.ones(nA) * epsilon / nA max_ind = np.argmax(Q[state]) probs[max_ind] += (1.0 - epsilon) return probs def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): #episode = [] state = env.reset() # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # TODO: complete the function epsilon = 0.005 #get first action action = get_action_epsilon_greedy(Q, state, epsilon) while True: s, a, r, ns, done = get_sars(Q, state, action, epsilon) q_cur = Q[s][a] q_expected = np.dot(get_probs(Q, ns, epsilon),Q[ns]) Q[s][a] = q_cur + alpha * (r + gamma*q_expected - q_cur) state = ns action = get_action_epsilon_greedy(Q, state, epsilon) if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
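###Markdown To see what `get_probs` produces, here is a small self-contained check (the action values below are made up purely for illustration):
```
import numpy as np

nA, eps = 4, 0.005
q_row = np.array([-1.0, -3.0, -2.0, -4.0])   # hypothetical action values for one state

probs = np.ones(nA) * eps / nA               # epsilon/|A| for every action
probs[np.argmax(q_row)] += 1.0 - eps         # extra mass on the greedy action (index 0 here)

print(probs)                                 # approximately [0.99625 0.00125 0.00125 0.00125]
print(probs.sum())                           # sums to 1 (up to floating-point rounding)
print(np.dot(probs, q_row))                  # expected value used in the Expected Sarsa target
```
The `np.dot(get_probs(Q, ns, epsilon), Q[ns])` line in `expected_sarsa` computes exactly this probability-weighted value for the next state.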
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA #best action best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def greey_probs(Q_s, epsilon, nA): policy_s = np.ones(nA) #best action best_a = np.argmax(Q_s) policy_s[best_a] = 1 return policy_s def generate_episode_from_Q(env, Q, epsilon, nA): """ generates an episode from following the epsilon-greedy policy """ episode = [] state = env.reset() while True: #This will create a random array with nA elemenst and dimension 1 x size action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) episode.append((state, action, reward)) state = next_state if done: break return episode def update_Q(env, episode, Q, alpha, gamma): """ updates the action-value function estimate using the most recent episode """ states, actions, rewards = zip(*episode) # prepare for discounting discounts = np.array([gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): old_Q = Q[state][actions[i]] Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q) return Q def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step if next_state is not None: Qsa_next = Q[next_state][next_action] else: Qsa_next = 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def sarsa(env, num_episodes, alpha, gamma=1.0,eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor epsilon = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = max(epsilon*eps_decay, eps_min) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
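###Markdown The episode loop in `sarsa` above is still marked as a TODO. One possible way to finish it, reusing the `epsilon_greedy_probs` and `update_Q_sarsa` helpers defined in the same cell, is sketched below; it is meant to sit inside the `for i_episode ...` loop, after the line that decays `epsilon`, and is only a sketch rather than the single correct completion:
```
state = env.reset()
# pick A_0 from the epsilon-greedy policy
policy_s = epsilon_greedy_probs(Q[state], epsilon, env.nA)
action = np.random.choice(np.arange(env.nA), p=policy_s)
while True:
    next_state, reward, done, info = env.step(action)
    if done:
        # terminal transition: no bootstrapping, the target is just the reward
        Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward)
        break
    # sample A_{t+1} first, because the Sarsa target uses the action actually taken next
    policy_s = epsilon_greedy_probs(Q[next_state], epsilon, env.nA)
    next_action = np.random.choice(np.arange(env.nA), p=policy_s)
    Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward,
                                      next_state, next_action)
    state, action = next_state, next_action
```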
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. 
The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def take_action(state, q_table, epsilon): action = np.random.randint(0, env.action_space.n) if (state in q_table) and np.random.random() > epsilon: action = np.argmax(q_table[state]) return action def sarsa(env, num_episodes, alpha, gamma=1.0, epsilon_min = 0.1): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.action_space.n)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode state = env.reset() past_action = None past_state = None done = False action = take_action(state, Q, epsilon) while not done: past_state = state state, reward, done, info = env.step(action) past_action = action action = take_action(state, Q, epsilon) Q[past_state][past_action] += alpha * (reward + gamma * Q[state][action] - Q[past_state][past_action]) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
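###Markdown Besides the automated unit test, it can be reassuring to watch the learned policy act. The sketch below (run it after the next code cell, so that `Q_sarsa` exists) rolls out one purely greedy episode and prints the visited states and the return; it is only an informal sanity check alongside the unit test:
```
state = env.reset()                       # every episode starts in state 36
path, total_reward = [state], 0
for _ in range(50):                       # safety cap in case the greedy policy ever loops
    action = np.argmax(Q_sarsa[state])    # act greedily with respect to the learned values
    state, reward, done, info = env.step(action)
    path.append(state)
    total_reward += reward
    if done:
        break
print(path, total_reward)                 # an optimal trajectory ends in state 47 with return -13
```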
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode state = env.reset() past_action = None past_state = None done = False action = take_action(state, Q, epsilon) while not done: past_state = state state, reward, done, info = env.step(action) past_action = action action = take_action(state, Q, epsilon) Q[past_state][past_action] += alpha * (reward + gamma * max(Q[state]) - Q[past_state][past_action]) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0/i_episode state = env.reset() past_action = None past_state = None done = False action = take_action(state, Q, epsilon) while not done: past_state = state state, reward, done, info = env.step(action) past_action = action action = take_action(state, Q, epsilon) expected_q = 0 best_action = np.argmax(Q[state]) for i in range(env.action_space.n): if best_action != i: expected_q += Q[state][i] * epsilon / env.action_space.n else: expected_q += Q[state][i] * (1 - epsilon + epsilon / env.action_space.n) Q[past_state][past_action] += alpha * (reward + gamma * expected_q - Q[past_state][past_action]) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
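###Markdown The loop that accumulates `expected_q` above is a per-action spelling of the expectation $\sum_a \pi(a\,|\,S_{t+1})\,Q(S_{t+1},a)$ under the $\epsilon$-greedy policy. The same value can be computed with a dot product; the following self-contained snippet (with made-up action values) shows the two forms agree:
```
import numpy as np

nA, eps = 4, 0.1
q_next = np.array([0.0, -1.5, -0.5, -2.0])    # hypothetical action values for the next state

# explicit per-action loop, as in the cell above
expected_loop = sum(q_next[a] * (eps / nA if a != np.argmax(q_next) else 1 - eps + eps / nA)
                    for a in range(nA))

# equivalent vectorised form
probs = np.full(nA, eps / nA)
probs[np.argmax(q_next)] += 1 - eps
expected_dot = np.dot(probs, q_next)

print(np.isclose(expected_loop, expected_dot))  # True
```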
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0] = -np.arange(3, 15)[::-1] V_opt[1] = -np.arange(3, 15)[::-1] + 1 V_opt[2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) pol_opt = np.hstack((np.ones(11), 2, 0)) print(pol_opt) ###Output [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2. 0.] ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy_policy(Q, epsilon, state, nA): if state in Q: best_action = np.argmax(Q[state]) if np.random.uniform()>epsilon: return best_action return np.random.choice(np.arange(nA)) def sarsa_update(Q, transition, alpha, gamma): state, action, reward, done, next_state, next_action = transition q_t = Q[state][action] q_tp1 = Q[next_state][next_action] td_target = reward + (gamma * (1-done) * q_tp1) Q[state][action] = q_t + alpha * (td_target-q_t) return Q[state][action] def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor ep_scores = deque(maxlen=100) avg_ep_scores = deque(maxlen = num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): ## TODO: complete the function eps = 1.0/i_episode score = 0 state = env.reset() action = eps_greedy_policy(Q, eps, state, nA) while True: next_state, reward, done, info = env.step(action) score += reward next_action = eps_greedy_policy(Q, eps, next_state, nA) transition = state, action, reward, done, next_state, next_action Q[state][action] = sarsa_update(Q, transition, alpha, gamma) if done: ep_scores.append(score) break state = next_state action = next_action # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() avg_ep_scores.append(np.mean(ep_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_ep_scores),endpoint=False), np.asarray(avg_ep_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(avg_ep_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def qlearning_update(Q, transition, alpha, gamma): state, action, reward, done, next_state = transition q_t = Q[state][action] q_tp1 = np.max(Q[next_state]) td_target = reward + (gamma * (1-done) * q_tp1) Q[state][action] = q_t + alpha * (td_target-q_t) return Q[state][action] def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) ep_scores = deque(maxlen=100) avg_ep_scores = deque(maxlen = num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): ## TODO: complete the function eps = 1.0/i_episode score = 0 state = env.reset() while True: action = eps_greedy_policy(Q, eps, state, nA) next_state, reward, done, info = env.step(action) score += reward transition = state, action, reward, done, next_state Q[state][action] = qlearning_update(Q, transition, alpha, gamma) if done: ep_scores.append(score) break state = next_state # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() avg_ep_scores.append(np.mean(ep_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_ep_scores),endpoint=False), np.asarray(avg_ep_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(avg_ep_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def exp_sarsa_update(Q, transition, eps, alpha, gamma): state, action, reward, done, next_state = transition nA = Q[state].shape[0] q_t = Q[state][action] best_q = np.max(Q[next_state]) exp_q = ((1-eps)*best_q) + ((eps/nA)*sum(Q[next_state])) td_target = reward + (gamma * (1-done) * exp_q) Q[state][action] = q_t + alpha * (td_target-q_t) return Q[state][action] def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(nA)) ep_scores = deque(maxlen=100) avg_ep_scores = deque(maxlen = num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): ## TODO: complete the function eps = 0.005 score = 0 state = env.reset() while True: action = eps_greedy_policy(Q, eps, state, nA) next_state, reward, done, info = env.step(action) score += reward transition = state, action, reward, done, next_state Q[state][action] = exp_sarsa_update(Q, transition, eps, alpha, gamma) if done: ep_scores.append(score) break state = next_state # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() avg_ep_scores.append(np.mean(ep_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_ep_scores),endpoint=False), np.asarray(avg_ep_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % 100) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % 100), np.max(avg_ep_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
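###Markdown `exp_sarsa_update` above computes the expectation in a slightly rearranged form. Under an $\epsilon$-greedy policy every action has probability $\epsilon/|\mathcal{A}|$, with an extra $1-\epsilon$ on the greedy action, so$$\sum_a \pi(a\,|\,S')\,Q(S',a) \;=\; \frac{\epsilon}{|\mathcal{A}|}\sum_a Q(S',a) \;+\; (1-\epsilon)\max_a Q(S',a),$$which is exactly the `((1-eps)*best_q) + ((eps/nA)*sum(Q[next_state]))` expression in the code.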
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import random import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1. / i_episode action = epsilon_greedy(Q, state, env.nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break else: next_action = epsilon_greedy(Q, next_state, env.nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
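###Markdown Note how `update_Q_sarsa` above handles the end of an episode: when `next_state` is `None` it uses $Q(S_{t+1},A_{t+1}) = 0$, so the final update of each episode reduces to$$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\big(R_{t+1} - Q(S_t,A_t)\big),$$i.e. the terminal state contributes no future value and the TD target is just the last reward.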
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_max = 0 if next_state is not None: Qsa_max = max(Q[next_state]) target = reward + (gamma * Qsa_max) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 1. / i_episode while True: action = epsilon_greedy(Q, state, env.nA, eps) next_state, reward, done, info = env.step(action) score += reward if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break else: next_action = epsilon_greedy(Q, next_state, env.nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_max = 0 if next_state is not None: prob = np.ones(nA) * eps / nA # current policy (for next state S') prob[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) Qsa_max = np.dot(Q[next_state], prob) target = reward + (gamma * Qsa_max) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 state = env.reset() eps = 0.005 while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break else: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
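For reference, the one-step updates that the two TODOs above are meant to implement take the standard forms (following Sutton and Barto): the Sarsa target uses the action actually selected in the next state, while the Q-learning (Sarsamax) target uses the greedy action.

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big)$$

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t) \big)$$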
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from typing import List from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q: defaultdict, policy: defaultdict, epsilon: float, state: int = None) -> defaultdict: n_actions: int = env.action_space.n if state is None: for state in Q: for action in range(0, n_actions): policy[state][action] = 1 / n_actions else: index_max_arg: int = np.argmax(Q[state]) for action in range(0, n_actions): policy[state][action] = 1 / n_actions policy[state][index_max_arg] = 1 - epsilon + (epsilon / n_actions) return policy import random def choose_epsilon_greedy_action(Q: defaultdict, state: int, epsilon: float) -> int: if random.random() > epsilon: return np.argmax(Q[state]) else: n_actions: int = env.action_space.n return np.random.choice(np.arange(n_actions), p=np.full((n_actions), 1/n_actions)) def sarsa(env, num_episodes, alpha:float=0.05, gamma:float=1.0) -> defaultdict: # initialize action-value function (empty dictionary of arrays) nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon: float = 1/i_episode state = env.reset() action = choose_epsilon_greedy_action(Q, state, epsilon) while True: next_state, reward, is_done, info = env.step(action) current_estimate: float = Q[state][action] if not is_done: next_action = choose_epsilon_greedy_action( Q, next_state, epsilon) next_estimate: float = Q[next_state][next_action] target: float = reward + (gamma * next_estimate) Q[state][action] += alpha * (target - current_estimate) state = next_state action = next_action if is_done: target: float = reward + (gamma * 0.0) Q[state][action] += alpha * (target - current_estimate) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
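A quick way to sanity-check `choose_epsilon_greedy_action` defined above is to call it on a hand-built action-value entry (a minimal sketch; `Q_demo` and its values are made up for illustration, and `env` from the earlier cell is used internally to read the number of actions):

```python
from collections import defaultdict
import numpy as np

Q_demo = defaultdict(lambda: np.zeros(4))
Q_demo[0] = np.array([0.1, 0.5, 0.2, 0.2])           # hypothetical action values for state 0
print(choose_epsilon_greedy_action(Q_demo, 0, 0.0))   # epsilon = 0: the greedy action (index 1)
print(choose_epsilon_greedy_action(Q_demo, 0, 1.0))   # epsilon = 1: a uniformly random action
```

With `epsilon = 0.0` the call should return the greedy action, and with `epsilon = 1.0` it should return each of the four actions with equal probability.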
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, num_episodes=5000, alpha=.01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
###Output
 Episode 5000/5000
###Markdown
Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code
def q_learning(env, num_episodes, alpha, gamma=1.0):
    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(env.nA))

    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        epsilon: float = 1/i_episode
        state = env.reset()
        while True:
            # choose A from S using the epsilon-greedy behaviour policy
            action = choose_epsilon_greedy_action(Q, state, epsilon)
            next_state, reward, is_done, info = env.step(action)
            current_estimate: float = Q[state][action]
            if not is_done:
                # sarsamax target: the greedy value of the next state
                next_estimate: float = np.max(Q[next_state])
                target: float = reward + (gamma * next_estimate)
                Q[state][action] += alpha * (target - current_estimate)
                state = next_state
            if is_done:
                target: float = reward + (gamma * 0.0)
                Q[state][action] += alpha * (target - current_estimate)
                break
    return Q
###Output
 _____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def compute_expected_value(Q: defaultdict, next_state: int, epsilon: float) -> float: n_actions: int = len(Q[next_state]) index_max_arg: int = np.argmax(Q[next_state]) policy = np.ones(n_actions) * epsilon / n_actions policy[index_max_arg] = 1 - epsilon + (epsilon / n_actions) return np.dot(Q[next_state], policy) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) epsilon: float = 1.0 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 0.005 #max(epsilon*0.99999, 0.005) state = env.reset() while True: action = choose_epsilon_greedy_action(Q, state, epsilon) next_state, reward, is_done, info = env.step(action) if not is_done: current_estimate: float = Q[state][action] target: float = reward + (gamma * compute_expected_value(Q, next_state, epsilon)) Q[state][action] += alpha * (target - current_estimate) state = next_state if is_done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
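As a reference for the implementation below, the Expected Sarsa target replaces the sampled value of the next state-action pair with an expectation over the current (epsilon-greedy) policy:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a) - Q(S_t, A_t) \Big)$$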
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def eps_greedy_action(epsilon, env, Q, state): rand = random.uniform(0, 1) if rand < epsilon: action = env.action_space.sample() else: try: action = np.argmax([Q[state][0],Q[state][1],Q[state][2],Q[state][3]]) except: action = env.action_space.sample() return action def sarsa(env, num_episodes, alpha, gamma=1, epsilon = 1): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) state = env.reset() # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() done = False state = env.reset() #epsilon = 1.0 / i_episode epsilon = epsilon - 1/float(num_episodes) while not done: action = eps_greedy_action(epsilon, env, Q, state) next_state, next_reward, done, info = env.step(action) next_action = eps_greedy_action(epsilon, env, Q, next_state) if not done: Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * Q[next_state][next_action] - Q[state][action]) else: Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * 0 - Q[state][action]) action = next_action state = next_state #epsilon = epsilon - 1/float(num_episodes) #print(epsilon) ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 200/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def eps_greedy_action(epsilon, env, Q, state): rand = random.uniform(0, 1) if rand < epsilon: action = env.action_space.sample() else: try: action = np.argmax([Q[state][0],Q[state][1],Q[state][2],Q[state][3]]) except: action = env.action_space.sample() return action def q_learning(env, num_episodes, alpha, gamma=1.0, epsilon = 1): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) state = env.reset() # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() done = False state = env.reset() #epsilon = 1.0 / i_episode epsilon = epsilon - 1/float(num_episodes) while not done: action = eps_greedy_action(epsilon, env, Q, state) next_state, next_reward, done, info = env.step(action) #next_action = eps_greedy_action(epsilon, env, Q, next_state) if not done: Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * np.max(Q[next_state]) - Q[state][action]) #else: # Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * 0 - Q[state][action]) #action = next_action state = next_state #epsilon = epsilon - 1/float(num_episodes) #print(epsilon) ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def eps_greedy_action(epsilon, env, Q, state): rand = random.uniform(0, 1) if rand < epsilon: action = env.action_space.sample() else: try: action = np.argmax([Q[state][0],Q[state][1],Q[state][2],Q[state][3]]) except: action = env.action_space.sample() return action def expected_sarsa(env, num_episodes, alpha, gamma=1.0, epsilon = 1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) state = env.reset() # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() done = False state = env.reset() epsilon = 1.0 / i_episode #epsilon = epsilon - 1/float(num_episodes) while not done: action = eps_greedy_action(epsilon, env, Q, state) next_state, next_reward, done, info = env.step(action) #next_action = eps_greedy_action(epsilon, env, Q, next_state) if not done: Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * np.mean(Q[next_state]) - Q[state][action]) #else: # Q[state][action] = Q[state][action] + alpha * (next_reward + gamma * 0 - Q[state][action]) #action = next_action state = next_state #epsilon = epsilon - 1/float(num_episodes) #print(epsilon) ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 50000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 50000/50000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys # !{sys.executable} -m pip install seaborn import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): # """ updates the action-value function estimate using the most recent episode """ # old_Q = Q[state][action] # Q_next = Q[next_state][next_action] if next_state is not None else 0 # Q[state][action] = old_Q + (alpha*((reward + (gamma*Q_next)) - old_Q)) # return Q[state][action] """ updates the action-value function estimate using the most recent episode """ old_Q = Q[state][action] Q_next = Q[next_state][next_action] if next_state is not None else 0 Q[state][action] = old_Q + (alpha*((reward + (gamma*Q_next)) - old_Q)) return Q[state][action] def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(alpha, gamma, Q, state, action, reward, next_state=None): """ updates the action-value function estimate using the most recent episode """ old_Q = Q[state][action] Q_next = max(Q[next_state]) if next_state is not None else 0 # Q_next = 0 # for next_action1 in Q[next_state]: # Q_next = max(Q_next, Q[next_state][next_action1]) Q[state][action] = old_Q + (alpha*((reward + (gamma*Q_next)) - old_Q)) return Q[state][action] def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 # initialize score state = env.reset() # start episode nA = env.action_space.n eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
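Independently of which control method produced it, a learned Q-table can be sanity-checked by rolling out its greedy policy from the start state. The sketch below is minimal and uses `Q_sarsa` from the earlier cell; the same idea applies to `Q_sarsamax` computed in the next cell, and the cap of 50 steps is an arbitrary safeguard.

```python
state = env.reset()
path = [state]
while True:
    action = int(np.argmax(Q_sarsa[state]))       # act greedily with respect to the learned values
    state, reward, done, info = env.step(action)
    path.append(state)
    if done or len(path) > 50:                    # cap the rollout length, just in case
        break
print(path)                                       # ideally 36 -> 24 -> 25 -> ... -> 35 -> 47
```

For a well-trained agent the printed path should hug the row just above the cliff and end in the terminal state `47`.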
###Code
# obtain the estimated optimal policy and corresponding action-value function
# (plot_every is passed by keyword so that gamma keeps its default value of 1)
Q_sarsamax = q_learning(env, 5000, .01, plot_every=100)

# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)

# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
###Output
 Episode 5000/5000
###Markdown
Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code
def update_Q(alpha, gamma, Q, state, action, reward, next_state=None, eps=0.005):
    """ updates the action-value function estimate using the most recent time step """
    old_Q = Q[state][action]
    if next_state is not None:
        # expected value of Q[next_state] under the current epsilon-greedy policy
        nA = len(Q[next_state])
        policy_s = np.ones(nA) * eps / nA
        policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA)
        Q_next = np.dot(Q[next_state], policy_s)
    else:
        Q_next = 0
    Q[state][action] = old_Q + (alpha*((reward + (gamma*Q_next)) - old_Q))
    return Q[state][action]

def epsilon_greedy(Q, state, nA, eps):
    """Selects epsilon-greedy action for supplied state.

    Params
    ======
        Q (dictionary): action-value function
        state (int): current state
        nA (int): number actions in the environment
        eps (float): epsilon
    """
    if random.random() > eps: # select greedy action with probability epsilon
        return np.argmax(Q[state])
    else:                     # otherwise, select an action randomly
        return random.choice(np.arange(env.action_space.n))

def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(env.nA))
    # monitor performance
    tmp_scores = deque(maxlen=plot_every)     # deque for keeping track of scores
    avg_scores = deque(maxlen=num_episodes)   # average scores over every plot_every episodes
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        score = 0                                                # initialize score
        state = env.reset()                                      # start episode
        nA = env.action_space.n
        eps = 1.0 / i_episode                                    # set value of epsilon
        action = epsilon_greedy(Q, state, nA, eps)               # epsilon-greedy action selection
        while True:
            next_state, reward, done, info = env.step(action)    # take action A, observe R, S'
            score += reward                                      # add reward to agent's score
            if not done:
                next_action = epsilon_greedy(Q, next_state, nA, eps)   # epsilon-greedy action
                Q[state][action] = update_Q(alpha, gamma, Q, \
                                            state, action, reward, next_state, eps)
                state = next_state     # S <- S'
                action = next_action   # A <- A'
            if done:
                Q[state][action] = update_Q(alpha, gamma, Q, \
                                            state, action, reward)
                tmp_scores.append(score)    # append score
                break
        if (i_episode % plot_every == 0):
            avg_scores.append(np.mean(tmp_scores))

    # plot performance
    plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
    return Q
###Output
 _____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
Part 0: Explore CliffWalkingEnvUse the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code import gym env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code import numpy as np from plot_utils import plot_values # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Prediction: State ValuesIn this section, you will write your own implementation of TD prediction (for estimating the state-value function).We will begin by investigating a policy where the agent moves:- `RIGHT` in states `0` through `10`, inclusive, - `DOWN` in states `11`, `23`, and `35`, and- `UP` in states `12` through `22`, inclusive, states `24` through `34`, inclusive, and state `36`.The policy is specified and printed below. Note that states where the agent does not choose an action have been marked with `-1`. ###Code policy = np.hstack([1*np.ones(11), 2, 0, np.zeros(10), 2, 0, np.zeros(10), 2, 0, -1*np.ones(11)]) print("\nPolicy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy.reshape(4,12)) ###Output Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1): [[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]] ###Markdown Run the next cell to visualize the state-value function that corresponds to this policy. Make sure that you take the time to understand why this is the corresponding value function! ###Code V_true = np.zeros((4,12)) for i in range(3): V_true[0:12][i] = -np.arange(3, 15)[::-1] - i V_true[1][11] = -2 V_true[2][11] = -1 V_true[3][0] = -17 plot_values(V_true) ###Output _____no_output_____ ###Markdown The above figure is what you will try to approximate through the TD prediction algorithm.Your algorithm for TD prediction has five arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `policy`: This is a 1D numpy array with `policy.shape` equal to the number of states (`env.nS`). 
`policy[s]` returns the action that the agent chooses when in state `s`.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `V`: This is a dictionary where `V[s]` is the estimated value of state `s`.Please complete the function in the code cell below. ###Code from collections import defaultdict, deque import sys def td_prediction(env, num_episodes, policy, alpha, gamma=1.0): # initialize empty dictionaries of floats V = defaultdict(float) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() # initial state while True: # choose action A_t using policy Pi, take action A_t and observe S_t+1, R_t+1 new_state, reward, done, _ = env.step(policy[state]) V[state] = V[state] + alpha * (reward + gamma * V[new_state] - V[state]) state = new_state if done: break return V ###Output _____no_output_____ ###Markdown Run the code cell below to test your implementation and visualize the estimated state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code import check_test # evaluate the policy and reshape the state-value function V_pred = td_prediction(env, 5000, policy, .01) # please do not change the code below this line V_pred_plot = np.reshape([V_pred[key] if key in V_pred else 0 for key in np.arange(48)], (4,12)) check_test.run_check('td_prediction_check', V_pred_plot) plot_values(V_pred_plot) ###Output Episode 5000/5000 ###Markdown How close is your estimated state-value function to the true state-value function corresponding to the policy? You might notice that some of the state values are not estimated by the agent. This is because under this policy, the agent will not visit all of the states. In the TD prediction algorithm, the agent can only estimate the values corresponding to states that are visited. Part 2: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def choose_action_greedy(Q, nA, state, epsilon): # calc e-greedy policy policy_prob = np.ones(nA) * epsilon / nA most_action_idx = np.argmax(Q[state]) policy_prob[most_action_idx] = 1.0 - epsilon + epsilon / nA # choose action from policy action = np.random.choice(np.arange(nA), p=policy_prob) return action def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function nA = env.action_space.n epsilon = 1/(i_episode + 1) state = env.reset() action = choose_action_greedy(Q, nA, state, epsilon) while True: next_state, reward, done, _ = env.step(action) next_action = choose_action_greedy(Q, nA, next_state, epsilon) Q[state][action] = \ Q[state][action] + alpha * (reward + gamma * Q[next_state][next_action] - Q[state][action]) state, action = next_state, next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1 / (i_episode + 1) state = env.reset() while True: action = choose_action_greedy(Q, env.nA, state, epsilon) next_state, reward, done, _ = env.step(action) Q[state][action] = \ Q[state][action] + alpha * (reward + gamma * np.max(Q[next_state]) - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 4: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def policy_prob_from_Qs(Q, nA, state, epsilon): # calc e-greedy policy policy_prob = np.ones(nA) * epsilon / nA most_action_idx = np.argmax(Q[state]) policy_prob[most_action_idx] = 1.0 - epsilon + epsilon / nA return policy_prob def expected_value(Q, state, policy_prob): return np.dot(policy_prob, Q[state]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 0.005 #max(0.005, 1 / (i_episode + 1)) state = env.reset() policy_prob = policy_prob_from_Qs(Q, env.nA, state, epsilon) while True: action = np.random.choice(np.arange(env.nA), p=policy_prob) next_state, reward, done, _ = env.step(action) policy_prob = policy_prob_from_Qs(Q, env.nA, next_state, epsilon) Q[state][action] = Q[state][action] \ + alpha * (reward + gamma * expected_value(Q, next_state, policy_prob) - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
!['cliff-walking'](cliff-walking-task.png) ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/bash/dev/env/tensorflow/deep/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below. (_Feel free to define additional functions to help you to organize your code._) ###Code
import random

def epsilon_random_policy(Q_state, nA, epsilon):
    # build the epsilon-greedy action probabilities for one state
    probs = np.zeros(nA)
    if random.random() > epsilon:
        action = np.argmax(Q_state)
        probs[action] = 1
    else:
        probs = np.ones(nA)*(1/nA)
    return probs

def take_action(env, Q, state, epsilon):
    # sample an action from the epsilon-greedy policy for this state
    nA = env.action_space.n
    if state in Q:
        action_prob = epsilon_random_policy(Q[state], nA, epsilon)
        action = np.random.choice(nA, p=action_prob)
    else:
        action = env.action_space.sample()
    return action

def monitor(i_episode, num_episodes):
    if i_episode % 100 == 0:
        print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
        sys.stdout.flush()

def sarsa(env, num_episodes, alpha, gamma=1.0):
    # initialize action-value function (empty dictionary of arrays)
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    # loop over episodes (epsilon is set at the start of each episode below)
    for i_episode in range(1, num_episodes+1):
        monitor(i_episode, num_episodes)
        epsilon = 1/i_episode
        state = env.reset()
        while True:
            action = take_action(env, Q, state, epsilon)
            state_next, reward_next, done, info = env.step(action)
            action_next = take_action(env, Q, state_next, epsilon)
            Q[state][action] += alpha*(reward_next + gamma*Q[state_next][action_next] - Q[state][action])
            if done:
                Q[state_next][action_next] += alpha*(-Q[state_next][action_next])
                break
            state = state_next
    return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .015)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learning

In this section, you will write your own implementation of the Q-learning control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate.
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): Q = defaultdict(lambda: np.zeros(env.nA)) for i_episode in range(1, num_episodes+1): monitor(i_episode, num_episodes) state = env.reset() epsilon = 1/i_episode while True: action = take_action(env, Q, state, epsilon) next_state, reward, done, info = env.step(action) Q[state][action] += alpha*(reward + gamma*np.max(Q[next_state]) - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 2000, .015) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 2000/2000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def weights(Q_state, epsilon): ''' In case of greedy epsilon policy, the max value will have (1-epsilon)+average weight ''' nA = len(Q_state) weights = np.ones(nA)*epsilon/nA g_action = np.argmax(Q_state) weights[g_action] += 1-epsilon return weights def expected_sarsa(env, num_episodes, alpha, gamma=1.0): Q = defaultdict(lambda: np.zeros(env.nA)) for i_episode in range(1, num_episodes+1): monitor(i_episode, num_episodes) epsilon = 1/i_episode state = env.reset() while True: action = take_action(env, Q, state, epsilon) next_state, reward, done, meta = env.step(action) x = Q[next_state] w = weights(Q[next_state], epsilon) Q[state][action] += alpha*(reward + gamma*np.dot(x, w) - Q[state][action]) if done: break state = next_state return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. 
State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(env.action_space.n)) def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): current = Q[state][action] Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) new_value = current + (alpha * (target - current)) return new_value def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 1.0 / i_episode action = epsilon_greedy(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) state = next_state action = next_action if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): current = Q[state][action] Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + (gamma * Qsa_next) new_value = current + (alpha * (target - current)) return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 1.0 / i_episode while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] policy_s = np.ones(nA) * eps / nA policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) Qsa_next = np.dot(Q[next_state], policy_s) target = reward + (gamma * Qsa_next) new_value = current + (alpha * (target - current)) return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 0.005 while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np import random from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code """ ok, change, moved sarsa loop back down into function, make it a little more like the one they have.""" """ my version of the epsilon greedy next action based on a modified version of the original solution""" def epsilon_greedy_next_action(env, Q, state, epsilon, nA): return np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() """ got this from the last solution, it was sidestepped in this solution because they get the probabiliteis in the epsilon greedy action function""" # this is code from the course solution but now I get it, better python than I had def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s """ this was my original sarsa update, different from the solution but may have worked""" def sarsa_update(Q_current, Q_next, alpha, reward): return Q_current + alpha * (reward + Q_next - Q_current) """ OK, I'm actually using this one and it is the one provided by the solution. only differences I see from mine are they actually use gamma but for me it was assumed to be 1.0 also, they zero out the value of the q table if the state has never been visited, may or may not be necessary. I like the way they split up the functions so use theirs""" def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" # print(f"state: {state}, action: {action}, actionvalue: {Q[state][action]}") current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step # print(f"next stateaction: {Q[state][action]} alpha: {alpha}") Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value """ OK, i'm using this one too and it is from the solution. I like this oneo compared to the one I was using I thought I neede to recreate the stepwise function, b tthey just used randomness in such a way that they figured out if they were greedy each time and then went for the max otherwise chose randomly. this more resembles my first attempt at this, which didn't work for other reasons I think.""" def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) eps_decay = .99999 eps_min = 0.05 epsilon = 1.0 Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # still doing this just like in monte carlo GLIE epsilon = 1.0 / i_episode # still getting a whole episode at a time, but need to update q inside this episode """ generates an episode from following the epsilon-greedy policy """ state = env.reset() # determine action from e-greedy policy action = epsilon_greedy(Q, state, env.nA, epsilon) while True: next_state, reward, done, info = env.step(action) if not done: next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) # print(f"state: {state}, action: {action}, reward: {reward}, next_action: {next_action}, current Q at s,a: {Q[state][action]}") # print(f"Q[{state}][{action}] value is: {Q[state][action]}") Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_action) # print(f"Q[{state}][{action}] value is: {Q[state][action]}") # print(f"new state: {next_state}, new Q value {Q[state][action]}") state = next_state action = next_action else: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 10000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 10000/10000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def greedy(Q, state): return np.argmax(Q[state]) def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # still doing this just like in monte carlo GLIE epsilon = 1.0 / i_episode # still getting a whole episode at a time, but need to update q inside this episode """ generates an episode from following the epsilon-greedy policy """ state = env.reset() # determine action from e-greedy policy action = epsilon_greedy(Q, state, env.nA, epsilon) while True: next_state, reward, done, info = env.step(action) if not done: next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) next_update_action = greedy(Q, next_state) # print(f"state: {state}, action: {action}, reward: {reward}, next_action: {next_action}, current Q at s,a: {Q[state][action]}") # print(f"Q[{state}][{action}] value is: {Q[state][action]}") Q[state][action] = update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state, next_update_action) # print(f"Q[{state}][{action}] value is: {Q[state][action]}") # print(f"new state: {next_state}, new Q value {Q[state][action]}") state = next_state action = next_action else: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # their way is much slicker, creating a separate vector with the weights for epsilon and then # doing the dot product with the next state's values to get the sum of multiples, that's smert def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def update_expected_sarsa(alpha, gamma, Q, state, action, reward, nA, eps, next_state=None): """Returns updated Q-value for the most recent experience.""" # print(f"state: {state}, action: {action}, actionvalue: {Q[state][action]}") current = Q[state][action] # estimate in Q-table (for current state, action pair) # calculate weighted sum of values at all actions non_greedy = eps * 1 / nA greedy = 1 - eps + eps * 1 / nA max_action_value = Q[next_state][np.argmax(Q[next_state])] rest_of_action_values = np.delete(Q[next_state], np.argmax(Q[next_state])) sum_other_actions = sum(action * non_greedy for action in rest_of_action_values) Qsa_next = greedy * max_action_value + sum_other_actions target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def their_expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) 
plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays nA = env.action_space.n Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # IMPORTANT the whole reason this didn't properly converge for me was this line # my sarsa update code was fine, my loop was alright but didn't need the next action part, # but the real reason this wouldn't converge was that my epsilon got too small too fast # so while apparently it was fine with q learning for epsilon to be 1 and then decrease # to an ever smaller number, like 1/10,000, but this did not work well with expected sarsa # high epsilon that trails off means you start not greedy at all and then end up greedy pretty # much all the time. constant low epsilon means you stay with a high chance of being greedy # almost the whole time. since Q learning is updating greedily but traveling with exploration, # making the policy more greedy over time makes sense because by the end you want your exploration to # match your q function. for expected starting greedy and staying greedy might make sense because even though you # are much more likely to take a greedy action, you are also more likely to take alternative actions into # account in your policy update, so there is less risk of getting inot a rut based on your q table. # epsilon = 1.0 / i_episode epsilon = 0.005 # still getting a whole episode at a time, but need to update q inside this episode """ generates an episode from following the epsilon-greedy policy """ state = env.reset() # determine action from e-greedy policy while True: action = epsilon_greedy(Q, state, nA, epsilon) next_state, reward, done, info = env.step(action) Q[state][action] = update_expected_sarsa(alpha, gamma, Q, state, action, reward, nA, epsilon, next_state) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
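Before running it, it may help to state the update that the two helper functions above (`update_Q_expsarsa` and `update_expected_sarsa`) both implement. Expected Sarsa replaces the sampled next action value with its expectation under the current $\epsilon$-greedy policy $\pi$:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \Big)$$

where, for an $\epsilon$-greedy policy over $|\mathcal{A}|$ actions, $\pi(a \mid S_{t+1}) = 1 - \epsilon + \epsilon/|\mathcal{A}|$ for the greedy action and $\epsilon/|\mathcal{A}|$ for every other action. The `np.dot(Q[next_state], policy_s)` line computes exactly this weighted sum.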
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below. (_Feel free to define additional functions to help you to organize your code._) ###Code
def epsilon_greedy(Q, state, nA, eps):
    # note: the argument order (Q, state, nA, eps) matches how the Q-learning
    # and Expected Sarsa cells later in this notebook call this helper
    if np.random.random() > eps:
        action = np.argmax(Q[state])
    else:
        action = env.action_space.sample()
    return action

def update_Q_sarsa(Q, gamma, alpha, state, action, reward, next_state = None, next_action = None):
    # one-step Sarsa update; bootstraps from 0 when there is no next state
    current = Q[state][action]
    Qsa_next = Q[next_state][next_action] if next_state is not None else 0
    target = reward + gamma * Qsa_next
    new_Q = current + alpha * (target - current)
    return new_Q

def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every = 100):
    # initialize action-value function (empty dictionary of arrays)
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    # initialize performance monitor
    tmp_scores = deque(maxlen=plot_every)
    avg_scores = deque(maxlen=num_episodes)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        score = 0
        state = env.reset()
        eps = 1 / i_episode
        action = epsilon_greedy(Q, state, nA, eps)
        while True:
            next_state, reward, done, info = env.step(action)
            score += reward
            if not done:
                next_action = epsilon_greedy(Q, next_state, nA, eps)
                Q[state][action] = update_Q_sarsa(Q, gamma, alpha, state, action, reward, next_state, next_action)
                state = next_state
                action = next_action
            if done:
                Q[state][action] = update_Q_sarsa(Q, gamma, alpha, state, action, reward)
                tmp_scores.append(score)
                break
        if (i_episode % plot_every == 0):
            avg_scores.append(np.mean(tmp_scores))
    # plot performance
    plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
    plt.xlabel('Episode Number')
    plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
    plt.show()
    # print best 100-episode performance
    print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
    return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
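For reference, the target that `update_Q_sarsa` above constructs is the standard one-step Sarsa update,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big)$$

with $Q(S_{t+1}, A_{t+1})$ taken to be $0$ when $S_{t+1}$ is terminal (the `next_state is None` branch), so the final update of each episode regresses toward the last reward alone.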
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 50000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 50000/50000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(Q, gamma, alpha, state, action, reward, next_state = None): current = Q[state][action] Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 target = reward + gamma * Qsa_next new_Q = current + alpha * (target - current) return new_Q # def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): # """Returns updated Q-value for the most recent experience.""" # current = Q[state][action] # estimate in Q-table (for current state, action pair) # Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state # target = reward + (gamma * Qsa_next) # construct TD target # new_value = current + (alpha * (target - current)) # get updated value # return new_value # def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every = 100): # # initialize action-value function (empty dictionary of arrays) # nA = env.action_space.n # Q = defaultdict(lambda: np.zeros(env.nA)) # # initialize performance monitor # tmp_scores = deque(maxlen=plot_every) # avg_scores = deque(maxlen=num_episodes) # # loop over episodes # for i_episode in range(1, num_episodes+1): # # monitor progress # if i_episode % 100 == 0: # print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") # sys.stdout.flush() # score = 0 # state = env.reset() # eps = 1 / i_episode # action = epsilon_greedy(Q, state, eps, nA) # while True: # next_state, reward, done, info = env.step(action) # score += reward # if not done: # next_action = epsilon_greedy(Q, next_state, eps, nA) # Q[state][action] = update_Q_qlearning(Q, gamma, alpha, state, action, reward, next_state, next_action) # state = next_state # action = next_action # if done: # Q[state][action] = update_Q_qlearning(Q, gamma, alpha, state, action, reward) # tmp_scores.append(score) # break # if (i_episode % plot_every == 0): # avg_scores.append(np.mean(tmp_scores)) # # plot performance # plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) # plt.xlabel('Episode Number') # plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) # plt.show() # # print 
best 100-episode performance # print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) # return Q def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(Q, gamma, alpha, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
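For reference, the update performed by `update_Q_expsarsa` above is the standard Expected Sarsa rule: instead of bootstrapping from the single sampled next action as Sarsa does, it bootstraps from the expected action value under the current $\epsilon$-greedy policy,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \right).$$

Because the expectation removes the variance introduced by sampling $A_{t+1}$, the comparatively large step size $\alpha = 1$ used in the next cell can still learn stably together with the small fixed $\epsilon = 0.005$ set inside `expected_sarsa`.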
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def pick_action(Q_state, nA, eps): max_i = np.argmax(Q_state) probs = [(1 - eps + eps/nA) if i == max_i else eps/nA for i in range(nA)] return np.random.choice(range(nA), p=probs) def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): eps = 1 / i_episode # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S = env.reset() A = pick_action(Q[S], env.nA, eps) done = False while not done: S_next, R, done, info = env.step(A) A_next = pick_action(Q[S_next], env.nA, eps) Q[S][A] = (1 - alpha)*Q[S][A] + alpha * (R + gamma * Q[S_next][A_next]) S, A = S_next, A_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def pick_action(Q_state, nA, eps): max_i = np.argmax(Q_state) probs = [(1 - eps + eps/nA) if i == max_i else eps/nA for i in range(nA)] return np.random.choice(range(nA), p=probs) def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): eps = max(eps_min, 1 / i_episode) # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S = env.reset() A = pick_action(Q[S], env.nA, eps) done = False while not done: S_next, R, done, info = env.step(A) Q[S][A] = (1 - alpha)*Q[S][A] + alpha * (R + gamma * np.max(Q[S_next])) A_next = pick_action(Q[S_next], env.nA, eps) S, A = S_next, A_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def pick_action(Q_state, nA, eps): max_i = np.argmax(Q_state) probs = get_probs(Q_state, nA, eps) return np.random.choice(range(nA), p=probs) def get_probs(Q_state, nA, eps): max_i = np.argmax(Q_state) return [(1 - eps + eps/nA) if i == max_i else eps/nA for i in range(nA)] def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes eps = eps_start for i_episode in range(1, num_episodes+1): eps = 0.005 #1 / i_episode # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() S = env.reset() A = pick_action(Q[S], env.nA, eps) done = False while not done: S_next, R, done, info = env.step(A) Q_exp = sum(np.array(get_probs(Q[S_next], env.nA, eps))*np.array(Q[S_next])) Q[S][A] = (1 - alpha)*Q[S][A] + alpha * (R + gamma * Q_exp) A_next = pick_action(Q[S_next], env.nA, eps) S, A = S_next, A_next return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt from tqdm import tqdm %matplotlib inline import check_test from plot_utils import plot_values %load_ext lab_black ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make("CliffWalking-v0") ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4, 12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def get_epsilon_greedy_action(Q: np.ndarray, epsilon: float, nA: int) -> np.ndarray: result = np.full(nA, epsilon / nA) best_action = Q.argmax() result[best_action] += 1 - epsilon return result def update_epsilon( epsilon: float, epsilon_decay: float = 0.9999, min_epsilon: float = 0.05 ) -> float: return np.maximum(epsilon * epsilon_decay, min_epsilon) def get_action(Q: defaultdict, state, epsilon: float, nA: int) -> np.ndarray: action_probs = get_epsilon_greedy_action(Q[state], epsilon, nA) return np.random.choice(np.arange(nA), p=action_probs) def update_Q_sarsa(Q, state, action, reward, next_state, next_action, gamma, alpha): new_estimate = reward + gamma * Q[next_state][next_action] Q[state][action] += alpha * (new_estimate - Q[state][action]) return Q def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in tqdm(range(1, num_episodes + 1)): epsilon = 1 / i_episode state = env.reset() action = get_action(Q, state, epsilon, env.nA) while True: next_state, reward, done, info = env.step(action) if not done: next_action = get_action(Q, next_state, epsilon, env.nA) Q = update_Q_sarsa( Q, state, action, reward, next_state, next_action, gamma, alpha ) state = next_state action = next_action else: Q = update_Q_sarsa( Q, state, action, reward, next_state, next_action, gamma, alpha ) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 7_500, alpha=0.01) # print the estimated optimal policy policy_sarsa = np.array( [np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)] ).reshape(4, 12) check_test.run_check("td_control_check", policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = [np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)] plot_values(V_sarsa) ###Output 100%|██████████| 7500/7500 [00:09<00:00, 786.68it/s] ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_qlearning(Q, state, action, reward, next_state, gamma, alpha): best_action = Q[next_state].argmax() new_estimate = reward + gamma * Q[next_state][best_action] Q[state][action] += alpha * (new_estimate - Q[state][action]) return Q def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in tqdm(range(1, num_episodes + 1)): epsilon = 1 / i_episode state = env.reset() action = get_action(Q, state, epsilon, env.nA) while True: action = get_action(Q, state, epsilon, env.nA) next_state, reward, done, info = env.step(action) if not done: Q = update_Q_qlearning( Q, state, action, reward, next_state, gamma, alpha ) state = next_state else: Q = update_Q_qlearning( Q, state, action, reward, next_state, gamma, alpha ) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, 0.01) # print the estimated optimal policy policy_sarsamax = np.array( [np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)] ).reshape((4, 12)) check_test.run_check("td_control_check", policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values( [np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)] ) ###Output 100%|██████████| 5000/5000 [00:09<00:00, 546.30it/s] ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expected_sarsa( Q, state, action, reward, next_state, gamma, alpha, epsilon, nA ): policy = get_epsilon_greedy_action(Q[next_state], epsilon, nA) expected_q = (policy * Q[next_state]).sum() new_estimate = reward + gamma * expected_q Q[state][action] += alpha * (new_estimate - Q[state][action]) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in tqdm(range(1, num_episodes + 1)): epsilon = 1 / i_episode state = env.reset() action = get_action(Q, state, epsilon, env.nA) while True: action = get_action(Q, state, epsilon, env.nA) next_state, reward, done, info = env.step(action) if not done: Q = update_Q_expected_sarsa( Q, state, action, reward, next_state, gamma, alpha, epsilon, env.nA ) state = next_state else: Q = update_Q_expected_sarsa( Q, state, action, reward, next_state, gamma, alpha, epsilon, env.nA ) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array( [np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)] ).reshape(4, 12) check_test.run_check("td_control_check", policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values( [np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)] ) ###Output 100%|██████████| 10000/10000 [00:14<00:00, 711.96it/s] ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def generate_episode(env,Q,eps): state = env.reset() episode = [] while True: probs = get_probs(env,state,Q,eps) action = np.random.choice(np.arange(env.action_space.n),p=probs) next_state, reward, done, info = env.step(action) episode.append((state, action, reward)) state = next_state if done: break return episode def get_probs(env,state,Q,eps): n = env.action_space.n argmax = np.argmax(Q[state]) probs = [eps/n if i != argmax else 1 - eps + eps/n for i in range(n)] return probs def update_Q(Q, count, state, action, reward, alpha, gamma): if state in Q.keys(): old_Q = Q[state][action] Q[state][action] += alpha*(reward*(gamma**count) - old_Q) return Q def sarsa(env, num_episodes, alpha, gamma=1.0, beign_eps=1.0, decayeps=0.9999,min_eps=0.10): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) eps = beign_eps # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function episode = generate_episode(env,Q,eps) for i, sample in enumerate(episode): state, action, reward = sample Q = update_Q(Q, i, state, action, reward, alpha, gamma) eps = max(eps*decayeps,min_eps) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate.
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
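The `expected_sarsa` function above is still the unfinished starter stub (its episode loop is marked `## TODO`), so the run in the next cell will not produce a meaningful policy. A minimal sketch of one possible completion is shown below; it is an illustrative example rather than the notebook author's solution, and it assumes a small fixed $\epsilon$, the `env.nA` interface used elsewhere in this notebook, and the `numpy`/`defaultdict` imports already made above. ###Code
# Illustrative sketch only (not the original author's solution): one way to
# complete the Expected Sarsa loop for the CliffWalking environment.
def expected_sarsa_sketch(env, num_episodes, alpha, gamma=1.0, eps=0.005):
    Q = defaultdict(lambda: np.zeros(env.nA))
    for i_episode in range(1, num_episodes + 1):
        state = env.reset()
        while True:
            # epsilon-greedy action selection
            probs = np.ones(env.nA) * eps / env.nA
            probs[np.argmax(Q[state])] += 1 - eps
            action = np.random.choice(np.arange(env.nA), p=probs)
            next_state, reward, done, info = env.step(action)
            # expected action value of the next state under the epsilon-greedy policy
            next_probs = np.ones(env.nA) * eps / env.nA
            next_probs[np.argmax(Q[next_state])] += 1 - eps
            expected_q = np.dot(Q[next_state], next_probs)
            # TD update towards the expected target
            Q[state][action] += alpha * (reward + gamma * expected_q - Q[state][action])
            state = next_state
            if done:
                break
    return Q
###Output _____no_output_____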
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import warnings warnings.filterwarnings('ignore') # Ignores all warnings. import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0 / i_episode state = env.reset() while True: # Choose the next action using epsilon-greedy MC control method. if state in Q: probs = np.ones(env.nA) * epsilon / env.nA best_action = np.argmax(Q[state]) probs[best_action] += 1 - epsilon action = np.random.choice(np.arange(env.nA), p=probs) else: action = np.random.choice(np.arange(env.nA)) next_state, reward, done, info = env.step(action) # Find out the expected return based on the best next action. if next_state in Q: next_action = np.argmax(Q[next_state]) G_t = Q[next_state][next_action] else: # Assumes the agent reaches the goal following the best policy. G_t = 0 Q[state][action] += alpha * (reward + gamma*G_t - Q[state][action]) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 1.0 / i_episode state = env.reset() if state in Q: probs = np.ones(env.nA) * epsilon / env.nA best_action = np.argmax(Q[state]) probs[best_action] += 1 - epsilon action = np.random.choice(np.arange(env.nA), p=probs) else: action = np.random.choice(np.arange(env.nA)) while True: next_state, reward, done, info = env.step(action) # Find out the expected return based on the best next action. if next_state in Q: next_action = np.argmax(Q[next_state]) G_t = Q[next_state][next_action] else: # Assumes the agent reaches the goal following the best policy. next_action = np.random.choice(np.arange(env.nA)) G_t = 0 Q[state][action] += alpha * (reward + gamma*G_t - Q[state][action]) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function epsilon = 0.005 #1.0 / i_episode state = env.reset() if state in Q: probs = np.ones(env.nA) * epsilon / env.nA best_action = np.argmax(Q[state]) probs[best_action] += 1 - epsilon action = np.random.choice(np.arange(env.nA), p=probs) else: action = np.random.choice(np.arange(env.nA)) while True: next_state, reward, done, info = env.step(action) # Find out the expected return based on the best next action. probs = np.ones(env.nA) * epsilon / env.nA if next_state in Q: next_action = np.argmax(Q[next_state]) probs[next_action] = 1 - epsilon + epsilon/env.nA G_t = np.dot(Q[next_state], probs) else: # Assumes the agent reaches the goal following the best policy. next_action = np.random.choice(np.arange(env.nA)) G_t = 0 Q[state][action] += alpha * (reward + gamma*G_t - Q[state][action]) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import pdb import check_test from plot_utils import plot_values from tqdm import trange ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0] = -np.arange(3, 15)[::-1] V_opt[1] = -np.arange(3, 15)[::-1] + 1 V_opt[2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # def eps_greedy_action(Q, state, eps): # if state in Q: # probs = np.ones(env.nA) * eps / env.nA # best_a = np.argmax(Q[state]) # probs[best_a] = 1 - eps + eps / env.nA # else: # probs = np.ones(env.nA) / env.nA # return np.random.choice(np.arange(env.nA), p=probs) # the above implementation does not provide stable actions when there is a tie! def eps_greedy_action(Q, state, eps): if np.random.random() > eps: return np.argmax(Q[state]) else: return env.action_space.sample() def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in trange(1, num_episodes+1): ## TODO: complete the function eps = max(1. 
/ i_episode, .05) state = env.reset() action = eps_greedy_action(Q, state, eps) while True: new_state, reward, done, info = env.step(action) new_action = eps_greedy_action(Q, new_state, eps) Q[state][action] += alpha * (reward + gamma * Q[new_state][new_action] - Q[state][action]) state, action = new_state, new_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 15000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output 100%|██████████| 15000/15000 [00:08<00:00, 1855.18it/s] ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def greedy_action(Q, state): return np.argmax(Q[state]) def eps_greedy_action(Q, state, eps): if np.random.random() > eps: return greedy_action(Q, state) else: return np.random.choice(np.arange(env.nA)) def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in trange(1, num_episodes+1): eps = max(1. / i_episode, .05) state = env.reset() while True: action = eps_greedy_action(Q, state, eps) new_state, reward, done, info = env.step(action) best_action = greedy_action(Q, state) # target = reward + gamma * Q[new_state][best_action] target = reward + gamma * np.max(Q[new_state]) Q[state][action] += alpha * (target - Q[state][action]) # Q[state][action] *= (1-alpha) # Q[state][action] += alpha * target state = new_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output 100%|██████████| 5000/5000 [00:06<00:00, 832.13it/s] ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def greedy_action(Q, state): return np.argmax(Q[state]) def eps_greedy_action(Q, state, eps): if np.random.random() > eps: return greedy_action(Q, state) else: return np.random.choice(np.arange(env.nA)) def expected_value(Q, state, eps): probs = np.ones(env.nA) * eps / env.nA best_a = greedy_action(Q, state) probs[best_a] = 1 - eps + eps / env.nA return np.dot(Q[state], probs) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in trange(1, num_episodes+1): state = env.reset() eps = .005 # max(1./ i_episode, .05) while True: action = eps_greedy_action(Q, state, eps) new_state, reward, done, info = env.step(action) target = reward + gamma * expected_value(Q, new_state, eps) Q[state][action] += alpha * (target - Q[state][action]) state = new_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
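For reference, the `expected_value` helper above weights each action value by the $\epsilon$-greedy policy probabilities

$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \dfrac{\epsilon}{|\mathcal{A}|} & \text{if } a = \arg\max_{a'} Q(s, a'), \\ \dfrac{\epsilon}{|\mathcal{A}|} & \text{otherwise,} \end{cases}$$

so the TD target used in the update is $R_{t+1} + \gamma \sum_a \pi(a \mid S_{t+1})\, Q(S_{t+1}, a)$ rather than the value of a single sampled next action.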
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output 100%|██████████| 5000/5000 [00:03<00:00, 1657.71it/s] ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values # Add-on : Hide Matplotlib deprecate warnings import warnings warnings.filterwarnings("ignore") # High resolution plot outputs for retina display %config InlineBackend.figure_format = 'retina' ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def select_greedy_action(Q, state, nA, epsilon): """Select an epsilon-greedy action for the current state.""" if np.random.random() > epsilon: # Select the greedy action return np.argmax(Q[state]) else: # Select randomly an action return np.random.choice(np.arange(nA)) def update_Q_sarsa(Q, alpha, gamma, state, action, reward, next_state=None, next_action=None): """ updates the Q-Table (action-value function estimate) using the most recent episode """ # Backup current Q for state, action Qsa = Q[state][action] # Retrieve Q for next_state, next action (If end of episode / next_state is None, then return 0) Qsa_next = Q[next_state][next_action] if next_state is not None else 0 # Update the Q-Table Qsa = Qsa + alpha * (reward + gamma * Qsa_next - Qsa) return Qsa def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # See https://docs.python.org/3.6/library/collections.html#collections.deque tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # Initialize environment state = env.reset() score =0 # Select an available action for the state by following the epsilon-greedy policy epsilon = 1.0 / i_episode action = select_greedy_action(Q, state, env.nA, epsilon) # Loop until the episode terminates while True: # Take the action and update the episode next_state, reward, done, info = env.step(action) # Update the score of the agent with the current reward score += reward if not done: # Select the next action for the state by following the epsilon-greedy policy next_action = select_greedy_action(Q, next_state, env.nA, epsilon) # Update the Q_table Q[state][action] = update_Q_sarsa(Q, alpha, gamma, state, action, reward, next_state, next_action) # Update state and action state = next_state action = next_action else: # The episode is finished # Update the Q_table Q[state][action] = update_Q_sarsa(Q, alpha, gamma, state, action, reward) # Backup the agent's score tmp_scores.append(score) break # Compute 
Average scores statistics if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores), color='magenta') plt.xlabel('Episode Number', color='magenta') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every, color='magenta') plt.tick_params(axis='x', colors='magenta') plt.tick_params(axis='y', colors='magenta') plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # select_greedy_action is identical as in Part 1 def select_greedy_action(Q, state, nA, epsilon): """Select an epsilon-greedy action for the current state.""" if np.random.random() > epsilon: # Select the greedy action return np.argmax(Q[state]) else: # Select randomly an action return np.random.choice(np.arange(nA)) def update_Q_sarsamax(Q, alpha, gamma, state, action, reward, next_state=None): """ updates the Q-Table (action-value function estimate) using the most recent episode """ # Backup current Q for state, action Qsa = Q[state][action] # Retrieve highest Q estimate for next_state (If end of episode / next_state is None, then return 0) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # Update the Q-Table Qsa = Qsa + alpha * ((reward + gamma * Qsa_next) - Qsa) return Qsa def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # See https://docs.python.org/3.6/library/collections.html#collections.deque tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # Initialize environment state = env.reset() score = 0 # set epsilon por the epsilon-greedy policy epsilon = 1.0 / i_episode # Loop until the episode terminates while True: # Select an available action for the state by following the epsilon-greedy policy action = select_greedy_action(Q, state, env.nA, epsilon) # Take the action and update the episode next_state, reward, done, info = env.step(action) # Update the score of the agent with the current reward score += reward # Update the Q_table Q[state][action] = update_Q_sarsamax(Q, alpha, gamma, state, action, reward, next_state) # Update state state = next_state if done: # The episode is finished # Backup the agent's score tmp_scores.append(score) break # Compute Average scores statistics if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores), color='magenta') plt.xlabel('Episode Number', color='magenta') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every, color='magenta') plt.tick_params(axis='x', colors='magenta') plt.tick_params(axis='y', colors='magenta') plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code # select_greedy_action is identical as in Part 1 def select_greedy_action(Q, state, nA, epsilon): """Select an epsilon-greedy action for the current state.""" if np.random.random() > epsilon: # Select the greedy action return np.argmax(Q[state]) else: # Select randomly an action return np.random.choice(np.arange(nA)) def get_action_probs(Qs, epsilon, nA): """" Get the action probabilities for the epsilon-greedy policy""" # With probability epsilon, the agent will select an action uniformly # at random from the set of available (non-greedy AND greedy) actions pi_s = np.ones(nA) * epsilon / nA # With probability (1 - epsilon), the agent will select the greedy action best_a = np.argmax(Qs) pi_s[best_a] = (1 - epsilon) + (epsilon / nA) return pi_s def update_Q_expected_sarsa(Q, alpha, gamma, state, action, reward, epsilon, nA, next_state=None): """ updates the Q-Table (action-value function estimate) using the most recent episode """ # Backup current Q for state, action Qsa = Q[state][action] # Retrieve the expected value for the next_state (If end of episode / next_state is None, then return 0) Qsa_next = np.dot(Q[next_state],get_action_probs(Q[next_state], epsilon, nA)) if next_state is not None else 0 # Update the Q-Table Qsa = Qsa + alpha * ((reward + gamma * Qsa_next) - Qsa) return Qsa def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # See https://docs.python.org/3.6/library/collections.html#collections.deque tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # Initialize 
environment state = env.reset() score = 0 # set epsilon por the epsilon-greedy policy #epsilon = 1.0 / i_episode ==> Does not provide good results in this case epsilon = 0.005 # Loop until the episode terminates while True: # Select an available action for the state by following the epsilon-greedy policy action = select_greedy_action(Q, state, env.nA, epsilon) # Take the action and update the episode next_state, reward, done, info = env.step(action) # Update the score of the agent with the current reward score += reward # Update the Q_table Q[state][action] = update_Q_expected_sarsa(Q, alpha, gamma, state, action, reward, epsilon, env.nA, next_state) # Update state state = next_state if done: # The episode is finished # Backup the agent's score tmp_scores.append(score) break # Compute Average scores statistics if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores), color='magenta') plt.xlabel('Episode Number', color='magenta') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every, color='magenta') plt.tick_params(axis='x', colors='magenta') plt.tick_params(axis='y', colors='magenta') plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function #Q_expsarsa = expected_sarsa(env, 10000, 1) Q_expsarsa = expected_sarsa(env, 5000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
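As a brief aside (added here for clarity; it only restates calls that already appear in the implementations above), every agent in this notebook interacts with the environment through the same pattern: reset the environment to obtain the start state, then repeatedly pick an action and call `env.step`, which returns the next state, the reward, a `done` flag, and an info dictionary. A minimal rollout with a random placeholder policy, assuming the `env` object created in the cell below, might look like this:

```python
state = env.reset()                          # every episode starts in state 36
for t in range(100):                         # cap the rollout length for this illustration
    action = env.action_space.sample()       # stand-in for an epsilon-greedy choice
    next_state, reward, done, info = env.step(action)
    state = next_state
    if done:                                 # state 47 is the only terminal state
        break
```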
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) env.reset() env.step(1) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from collections import defaultdict ### This is an agent that lives in a discrete world (finite states), with a finite action space ### ### This is the learner ### class Agent: def __init__(self, action_space, epsilon, gamma, alpha): # An np.array with all the possible actions for the agent self.actionSpace = action_space # This is the agent policy self.policy = defaultdict(lambda: np.zeros(len(action_space))) # The Q table initied with zeros for all states and actions self.QTable = defaultdict(lambda: np.zeros(len(action_space))) # The N table initied with zeros for all states and actions self.NTable = defaultdict(lambda: np.zeros(len(action_space))) # This is the sum of all the returns the agent received over the episode by selecting (state,action) self.returns_sum = defaultdict(lambda: np.zeros(len(action_space))) # Agent learning parameters self.epsilon, self.gamma, self.alpha = epsilon, gamma, alpha # Receives a list with (state, action, reward) def updateQTable_firstVisitMCPrediction(self, episode): """ actualizes the value of QTable according to the First-Visit MC Prediction algorithm """ states, actions, rewards = zip(*episode) discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): self.returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)]) self.NTable[state][actions[i]] += 1.0 self.QTable[state][actions[i]] = self.returns_sum[state][actions[i]] / self.NTable[state][actions[i]] def updateQTable_firstVisitMCControl_alphaConstant(self, episode): """ updates the action-value function estimate using the most recent episode """ states, actions, rewards = zip(*episode) # prepare for discounting discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): old_Q = self.QTable[state][actions[i]] self.QTable[state][actions[i]] = old_Q + self.alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q) def updateQTable_Sarsa(self): pass ## Can not be defined unless we have an standarized way to communicate with the environment # Receives a row from QTable def getProbabilities_egreedyPolicy(self, Q_state): """ obtains the action probabilities corresponding to epsilon-greedy policy """ # Make an uniform distribution policy for state S and multiply by epsilon to scale it according to the e-greedy policy probs_s = np.ones(len(self.actionSpace)) * self.epsilon / len(self.actionSpace) # Get the best action according to QTable for state S best_a = np.argmax(Q_state) # Put probability of best action according to a e-greedy policy probs_s[best_a] = 1 - self.epsilon + (self.epsilon / len(self.actionSpace)) return probs_s def updatePolicy(self): for state in list(self.QTable.keys()): self.policy[state] = np.argmax(self.QTable[state]) def updateEpsilon(self, epsilon): self.epsilon = epsilon def actionSpace_sample(self): return np.random.choice(self.actionSpace, p=np.ones(len(self.actionSpace))/len(self.actionSpace)) def decide_egreedy(self, state): if state in self.QTable: Q_state = self.QTable[state] action = np.random.choice(self.actionSpace, p=self.getProbabilities_egreedyPolicy(Q_state)) else: action = 
self.actionSpace_sample() return action def sarsa(env, num_episodes, alpha, gamma=1.0): agent = Agent(np.arange(env.action_space.n), epsilon=1, gamma=gamma, alpha=alpha) terminal_state = 47 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function agent.updateEpsilon(1/i_episode) state = env.reset() action = agent.decide_egreedy(state) while True: state_old = state action_old = action state, reward, boolean, prob = env.step(action_old) action = agent.decide_egreedy(state) Q_old = agent.QTable[state_old][action_old] agent.QTable[state_old][action_old] = Q_old + agent.alpha*(reward + agent.gamma*agent.QTable[state][action] - Q_old) if state == terminal_state: break return agent.QTable ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from collections import defaultdict ### This is an agent that lives in a discrete world (finite states), with a finite action space ### ### This is the learner ### class Agent: def __init__(self, action_space, epsilon, gamma, alpha): # An np.array with all the possible actions for the agent self.actionSpace = action_space # This is the agent policy self.policy = defaultdict(lambda: np.zeros(len(action_space))) # The Q table initied with zeros for all states and actions self.QTable = defaultdict(lambda: np.zeros(len(action_space))) # The N table initied with zeros for all states and actions self.NTable = defaultdict(lambda: np.zeros(len(action_space))) # This is the sum of all the returns the agent received over the episode by selecting (state,action) self.returns_sum = defaultdict(lambda: np.zeros(len(action_space))) # Agent learning parameters self.epsilon, self.gamma, self.alpha = epsilon, gamma, alpha # Receives a list with (state, action, reward) def updateQTable_firstVisitMCPrediction(self, episode): """ actualizes the value of QTable according to the First-Visit MC Prediction algorithm """ states, actions, rewards = zip(*episode) discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): self.returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)]) self.NTable[state][actions[i]] += 1.0 self.QTable[state][actions[i]] = self.returns_sum[state][actions[i]] / self.NTable[state][actions[i]] def updateQTable_firstVisitMCControl_alphaConstant(self, episode): """ updates the action-value function estimate using the most recent episode """ states, actions, rewards = zip(*episode) # prepare for discounting discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): old_Q = self.QTable[state][actions[i]] self.QTable[state][actions[i]] = old_Q + self.alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q) def updateQTable_Sarsa(self): pass ## Can not be defined unless we have an standarized way to communicate with the environment # Receives a row from QTable def getProbabilities_egreedyPolicy(self, Q_state): """ obtains the action probabilities corresponding to epsilon-greedy policy """ # Make an uniform distribution policy for state S and multiply by epsilon to scale it according to the e-greedy policy probs_s = np.ones(len(self.actionSpace)) * self.epsilon / len(self.actionSpace) # Get the best action according to QTable for state S best_a = np.argmax(Q_state) # Put probability of best action according to a e-greedy policy probs_s[best_a] = 1 - self.epsilon + (self.epsilon / len(self.actionSpace)) return probs_s def updatePolicy(self): for state in list(self.QTable.keys()): self.policy[state] = np.argmax(self.QTable[state]) def updateEpsilon(self, epsilon): self.epsilon = epsilon def actionSpace_sample(self): return np.random.choice(self.actionSpace, p=np.ones(len(self.actionSpace))/len(self.actionSpace)) def decide_egreedy(self, state): if state in self.QTable: Q_state = self.QTable[state] action = np.random.choice(self.actionSpace, p=self.getProbabilities_egreedyPolicy(Q_state)) else: action = 
self.actionSpace_sample() return action def decide_greedy(self, state): if state in self.QTable: Q_state = self.QTable[state] action = np.argmax(Q_state) else: action = self.actionSpace_sample() return action def q_learning(env, num_episodes, alpha, gamma=1.0): agent = Agent(np.arange(env.action_space.n), epsilon=1, gamma=gamma, alpha=alpha) terminal_state = 47 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function agent.updateEpsilon(1/i_episode) state = env.reset() action = agent.decide_egreedy(state) while True: state_old = state action_old = action state, reward, boolean, prob = env.step(action_old) action = agent.decide_greedy(state) # This is only for the max_{a}Q[s][a] computation Q_old = agent.QTable[state_old][action_old] agent.QTable[state_old][action_old] = Q_old + agent.alpha*(reward + agent.gamma*agent.QTable[state][action] - Q_old) action = agent.decide_egreedy(state) if state == terminal_state: break return agent.QTable ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from collections import defaultdict ### This is an agent that lives in a discrete world (finite states), with a finite action space ### ### This is the learner ### class Agent: def __init__(self, action_space, epsilon, gamma, alpha): # An np.array with all the possible actions for the agent self.actionSpace = action_space # This is the agent policy self.policy = defaultdict(lambda: np.zeros(len(action_space))) # The Q table initied with zeros for all states and actions self.QTable = defaultdict(lambda: np.zeros(len(action_space))) # The N table initied with zeros for all states and actions self.NTable = defaultdict(lambda: np.zeros(len(action_space))) # This is the sum of all the returns the agent received over the episode by selecting (state,action) self.returns_sum = defaultdict(lambda: np.zeros(len(action_space))) # Agent learning parameters self.epsilon, self.gamma, self.alpha = epsilon, gamma, alpha # Receives a list with (state, action, reward) def updateQTable_firstVisitMCPrediction(self, episode): """ actualizes the value of QTable according to the First-Visit MC Prediction algorithm """ states, actions, rewards = zip(*episode) discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): self.returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)]) self.NTable[state][actions[i]] += 1.0 self.QTable[state][actions[i]] = self.returns_sum[state][actions[i]] / self.NTable[state][actions[i]] def updateQTable_firstVisitMCControl_alphaConstant(self, episode): """ updates the action-value function estimate using the most recent episode """ states, actions, rewards = zip(*episode) # prepare for discounting discounts = np.array([self.gamma**i for i in range(len(rewards)+1)]) for i, state in enumerate(states): old_Q = self.QTable[state][actions[i]] self.QTable[state][actions[i]] = old_Q + self.alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q) def updateQTable_Sarsa(self): pass ## Can not be defined unless we have an standarized way to communicate with the environment # Receives a row from QTable def getProbabilities_egreedyPolicy(self, Q_state): """ obtains the action probabilities corresponding to epsilon-greedy policy """ # Make an uniform distribution policy for state S and multiply by epsilon to scale it according to the e-greedy policy probs_s = np.ones(len(self.actionSpace)) * self.epsilon / len(self.actionSpace) # Get the best action according to QTable for state S best_a = np.argmax(Q_state) # Put probability of best action according to a e-greedy policy probs_s[best_a] = 1 - self.epsilon + (self.epsilon / len(self.actionSpace)) return probs_s def updatePolicy(self): for state in list(self.QTable.keys()): self.policy[state] = np.argmax(self.QTable[state]) def updateEpsilon(self, epsilon): self.epsilon = epsilon def actionSpace_sample(self): return np.random.choice(self.actionSpace, p=np.ones(len(self.actionSpace))/len(self.actionSpace)) def decide_egreedy(self, state): if state in self.QTable: Q_state = self.QTable[state] action = np.random.choice(self.actionSpace, p=self.getProbabilities_egreedyPolicy(Q_state)) else: action = 
self.actionSpace_sample() return action def decide_greedy(self, state): if state in self.QTable: Q_state = self.QTable[state] action = np.argmax(Q_state) else: action = self.actionSpace_sample() return action def expected_sarsa(env, num_episodes, alpha, gamma=1.0): agent = Agent(np.arange(env.action_space.n), epsilon=0.005, gamma=gamma, alpha=alpha) terminal_state = 47 # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() action = agent.decide_egreedy(state) while True: state_old = state action_old = action state, reward, boolean, prob = env.step(action_old) Q_old = agent.QTable[state_old][action_old] # As we are using an egreedy policy, the probabilities are 1-eps for selecting the argmax(Q) and eps for # selecting a random action. Therefore, expected action is sum(probs[state][a]*Q[state][a] for a in actions). expected_value = np.dot(agent.QTable[state], agent.getProbabilities_egreedyPolicy(agent.QTable[state])) agent.QTable[state_old][action_old] = Q_old + agent.alpha*(reward + agent.gamma*expected_value - Q_old) action = agent.decide_egreedy(state) if state == terminal_state: break return agent.QTable ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /Applications/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Applications/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Applications/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) /Applications/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warnings.warn(message, mplDeprecation, stacklevel=1) ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_from_Q(env, Q, epsilon, state): return np.argmax(Q[state]) if np.random.uniform() >= epsilon else env.action_space.sample() def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0/i_episode state = env.reset() action = epsilon_greedy_from_Q(env, Q, epsilon, state) while True: next_state, reward, done, info = env.step(action) if not done: next_action = epsilon_greedy_from_Q(env, Q, epsilon, next_state) Q[state][action] += alpha*(reward + gamma*Q[next_state][next_action] - Q[state][action]) state = next_state action = next_action else: Q[state][action] += alpha*(reward - Q[state][action]) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def epsilon_greedy_from_Q(env, Q, epsilon, state): return np.argmax(Q[state]) if np.random.uniform() >= epsilon else env.action_space.sample() def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0/i_episode state = env.reset() while True: action = epsilon_greedy_from_Q(env, Q, epsilon, state) next_state, reward, done, info = env.step(action) if not done: Q[state][action] += alpha*(reward + gamma*max(Q[next_state]) - Q[state][action]) state = next_state else: Q[state][action] += alpha*(reward - Q[state][action]) break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Qs(env, Q, epsilon, gamma, alpha, nA, state, action, reward, next_state=None): policy = np.ones(nA)*epsilon/nA policy[np.argmax(Q[next_state])] = 1 - epsilon + epsilon/nA return Q[state][action] + alpha*(reward + gamma*np.dot(Q[next_state], policy) - Q[state][action]) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = 1.0/i_episode state = env.reset() while True: action = epsilon_greedy_from_Q(env, Q, epsilon, state) next_state, reward, done, info = env.step(action) Q[state][action] = update_Qs(env, Q, epsilon, gamma, alpha, nA, state, action, reward, next_state) state = next_state if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 50000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 50000/50000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code from math import tanh # Trying a different implementation of epsilon in epsilon_greedy_probs which uses a tanh function. # This leads to a softer drop of the value of epsilon in the first fraction of episodes. def update_Q(Qsa, Qsa_next, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa)) def epsilon_greedy_probs(env, Q_s, i_episode, eps=None, epsilon_min = 0.01, num_episodes = 5000): """ obtains the action probabilities corresponding to epsilon-greedy policy """ #epsilon = 1.0 / i_episode epsilon = epsilon_min+(1.0-epsilon_min)*(1-tanh(10*(i_episode/num_episodes))) if eps is not None: epsilon = eps policy_s = np.ones(env.nA) * epsilon / env.nA policy_s[np.argmax(Q_s)] = 1 - epsilon + (epsilon / env.nA) return policy_s ###Output _____no_output_____ ###Markdown New epsilon_greedy_probsThe above implementation of calculating epsilon based on tanh seems only to work with * Part 2: TD Control: Q-learning* Part 3: TD Control: Expected SarsaDunno why though.
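To make the difference between the two schedules concrete, the short sketch below (an illustration added for clarity, not a cell from the original notebook) prints both values at a few episode counts, using the same `epsilon_min = 0.01` and `num_episodes = 5000` defaults as `epsilon_greedy_probs` above:

```python
from math import tanh

epsilon_min, num_episodes = 0.01, 5000
for i_episode in [1, 50, 100, 500, 1000, 2500, 5000]:
    eps_inverse = 1.0 / i_episode
    eps_tanh = epsilon_min + (1.0 - epsilon_min) * (1 - tanh(10 * (i_episode / num_episodes)))
    print("episode {:5d}: 1/i_episode = {:.3f}, tanh schedule = {:.3f}".format(
        i_episode, eps_inverse, eps_tanh))
```

With these defaults, `1.0 / i_episode` falls below 0.05 within the first few dozen episodes, whereas the tanh schedule decays gradually over roughly the first thousand episodes and only approaches `epsilon_min` around the halfway point, which is the "softer drop" described above.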
###Code def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick action A action = np.random.choice(np.arange(env.nA), p=policy_s) # limit number of time steps per episode for t_step in np.arange(300): # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward if not done: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) # pick next action A' next_action = np.random.choice(np.arange(env.nA), p=policy_s) # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) # S <- S' state = next_state # A <- A' action = next_action if done: # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() while True: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick next action A action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # update Q Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode, 0.005) while True: # pick next action action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # get epsilon-greedy action probabilities (for S') policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode, 0.005) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0][0:13] = -np.arange(3, 15)[::-1] V_opt[1][0:13] = -np.arange(3, 15)[::-1] + 1 V_opt[2][0:13] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from math import floor def sarsa(env, num_episodes, alpha, gamma=1.0): epsilon = 0.1 policy = Sarsa(env, alpha, gamma, epsilon) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() run_episode(env, policy) return policy.Q def run_episode(env, policy): state = env.reset() done = False while not done: action = policy.act(state) next_state, reward, done, info = env.step(action) policy.learn(state, action, reward, next_state) state = next_state class Sarsa(object): def __init__(self, env, alpha, gamma, epsilon): # initialize action-value function (empty dictionary of arrays) self.Q = defaultdict(lambda: np.zeros(env.nA)) self.nA = env.nA self.alpha = alpha self.gamma = gamma self.epsilon = epsilon def learn(self, s0, a0, r1, s1): a1 = self.act(s1) q_obs_est = r1 + self.gamma * self.Q[s1][a1] q_old = self.Q[s0][a0] self.Q[s0][a0] += self.alpha * (q_obs_est - q_old) self.epsilon *= 0.9999 def act(self, s): if self.epsilon > 0.000001: action = floor(np.random.rand() / (self.epsilon / self.nA)) if action >= self.nA: # Choose greedily action = np.argmax(self.Q[s]) else: action = np.argmax(self.Q[s]) return action ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from math import floor def q_learning(env, num_episodes, alpha, gamma=1.0): epsilon = 0.1 policy = QLearning(env, alpha, gamma, epsilon) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() run_episode(env, policy) return policy.Q def run_episode(env, policy): state = env.reset() done = False while not done: action = policy.act(state) next_state, reward, done, info = env.step(action) policy.learn(state, action, reward, next_state) state = next_state class QLearning(object): def __init__(self, env, alpha, gamma, epsilon): # initialize action-value function (empty dictionary of arrays) self.Q = defaultdict(lambda: np.zeros(env.nA)) self.nA = env.nA self.alpha = alpha self.gamma = gamma self.epsilon = epsilon def learn(self, s0, a0, r1, s1): q_obs_est = r1 + self.gamma * max(self.Q[s1]) q_old = self.Q[s0][a0] self.Q[s0][a0] += self.alpha * (q_obs_est - q_old) self.epsilon *= 0.9999 def act(self, s): if self.epsilon > 0.000001: action = floor(np.random.rand() / (self.epsilon / self.nA)) if action >= self.nA: # Choose greedily action = np.argmax(self.Q[s]) else: action = np.argmax(self.Q[s]) return action ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import numpy as np from math import floor def randargmax(nparray): return np.random.choice(np.flatnonzero(nparray == nparray.max())) def expected_sarsa(env, num_episodes, alpha, gamma=1.0): epsilon = 0.1 policy = ExpectedSarsa(env.nA, alpha=alpha, gamma=gamma, epsilon=epsilon) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() run_episode(env, policy) return policy.Q def run_episode(env, policy): state = env.reset() done = False while not done: action = policy.act(state) next_state, reward, done, info = env.step(action) policy.learn(state, action, reward, next_state, done) state = next_state class ExpectedSarsa(object): def __init__(self, nA, alpha, gamma, epsilon): # initialize action-value function (empty dictionary of arrays) self.Q = defaultdict(lambda: np.zeros(env.nA)) self.nA = env.nA self.alpha = alpha self.gamma = gamma self.epsilon = epsilon def learn(self, s0, a0, r1, s1, done): p = np.full(self.nA, self.epsilon / self.nA) p[np.argmax(self.Q[s1])] += 1 - self.epsilon q_obs_est = r1 + self.gamma * np.sum(p * self.Q[s1]) q_old = self.Q[s0][a0] self.Q[s0][a0] += self.alpha * (q_obs_est - q_old) self.epsilon *= 0.99 def act(self, s): if self.epsilon > 0.000001: action = floor(np.random.rand() / (self.epsilon / self.nA)) if action >= self.nA: # Choose greedily action = np.argmax(self.Q[s]) else: action = np.argmax(self.Q[s]) return action ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. 
###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output _____no_output_____ ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q(Qsa, Qsa_next, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa)) def epsilon_greedy_probs(env, Q_s, i_episode, eps=None): """ obtains the action probabilities corresponding to epsilon-greedy policy """ epsilon = 1.0 / i_episode if eps is not None: epsilon = eps policy_s = np.ones(env.nA) * epsilon / env.nA policy_s[np.argmax(Q_s)] = 1 - epsilon + (epsilon / env.nA) return policy_s def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick action A action = np.random.choice(np.arange(env.nA), p=policy_s) # limit number of time steps per episode for t_step in np.arange(300): # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward if not done: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) # pick next action A' next_action = np.random.choice(np.arange(env.nA), p=policy_s) # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) # S <- S' state = next_state # A <- A' action = next_action if done: # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
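As a quick summary of what the implementation above does at every time step, `update_Q` applies the one-step Sarsa update

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\big),$$

where $A_{t+1}$ is itself drawn from the $\epsilon$-greedy policy, and the bootstrap term $Q(S_{t+1}, A_{t+1})$ is replaced by $0$ on the final step of an episode, which is exactly the `update_Q(Q[state][action], 0, reward, alpha, gamma)` call in the `done` branch.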
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output _____no_output_____ ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() while True: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick next action A action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # update Q Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
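The only change relative to Sarsa in the implementation above is the bootstrap term: passing `np.max(Q[next_state])` into `update_Q` turns the update into the Q-learning (Sarsamax) rule

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)\big).$$

Because the target no longer depends on the action the $\epsilon$-greedy behaviour policy actually selects at $S_{t+1}$, Q-learning estimates the value of the greedy policy while still exploring, which is what makes it an off-policy method.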
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode, 0.005) while True: # pick next action action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # get epsilon-greedy action probabilities (for S') policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode, 0.005) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
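In the implementation above the bootstrap term is `np.dot(Q[next_state], policy_s)`, i.e. the expected value of $Q(S_{t+1}, \cdot)$ under the $\epsilon$-greedy action probabilities, so the update being applied is the Expected Sarsa rule

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big(R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t)\Big).$$

Averaging over the next action removes the sampling noise that Sarsa has in its target, which helps explain why the fairly aggressive settings used here (a fixed `eps=0.005` inside the loop and `alpha=1` in the cell below) can still work well on this problem.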
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline # import check_test from plot_utils import plot_values import unittest from IPython.display import Markdown, display import numpy as np def printmd(string): display(Markdown(string)) V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 pol_opt = np.hstack((np.ones(11), 2, 0)) V_true = np.zeros((4,12)) for i in range(3): V_true[0:13][i] = -np.arange(3, 15)[::-1] - i V_true[1][11] = -2 V_true[2][11] = -1 V_true[3][0] = -17 def get_long_path(V): return np.array(np.hstack((V[0:13][0], V[1][0], V[1][11], V[2][0], V[2][11], V[3][0], V[3][11]))) def get_optimal_path(policy): return np.array(np.hstack((policy[2][:], policy[3][0]))) class Tests(unittest.TestCase): def td_prediction_check(self, V): to_check = get_long_path(V) soln = get_long_path(V_true) np.testing.assert_array_almost_equal(soln, to_check) def td_control_check(self, policy): to_check = get_optimal_path(policy) np.testing.assert_equal(pol_opt, to_check) def run_check(check_name, func): try: getattr(check, check_name)(func) except check.failureException as e: printmd('**<span style="color: red;">PLEASE TRY AGAIN</span>**') return printmd('**<span style="color: green;">PASSED</span>**') if __name__ == "__main__": check = Tests() ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. 
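As a small aside, because the states are numbered row by row, converting a state index to its (row, column) position on the $4\times 12$ grid is just integer division and remainder by 12. The helper below is purely illustrative and not part of the Gym environment's API; the name `state_to_position` and the `n_cols` parameter exist only for this example.

```python
def state_to_position(state, n_cols=12):
    """Map a CliffWalking state index to (row, column) grid coordinates."""
    return divmod(state, n_cols)

print(state_to_position(36))   # (3, 0)  -> start state, bottom-left corner
print(state_to_position(47))   # (3, 11) -> terminal state, bottom-right corner
print(state_to_position(37))   # (3, 1)  -> first cliff state
```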
###Code #actions just go up down left right print(env.action_space) #the map print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output /home/rm/.local/lib/python3.5/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead. warn_deprecated("2.2", "Passing one of 'on', 'true', 'off', 'false' as a " ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value # def epsilon_greedy(Q, state, epsilon, nA = 4): # seed = np.random.random(1)[0] # if seed < epsilon: # #random # # print("random") # action = np.random.choice(np.arange(nA), p=np.ones(nA)/nA) # else: # #greedy # # print("greedy") # action = np.argmax(Q[state]) # return int(action) def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() #set epsilon decay, v important epsilon = 1.0 / i_episode #choose action from epsilon greedy, start is 36, terminal 47 action = epsilon_greedy(Q, state, env.nA, epsilon) while True: next_state, reward, done, info = env.step(action) next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) #update Q Qsa_next = Q[next_state][next_action] if not done else 0 Q[state][action] = (1-alpha)*(Q[state][action]) + \ alpha*(reward + (gamma * Qsa_next)) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) epsilon = eps_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() #set epsilon decay epsilon = max(epsilon*eps_decay, eps_min) #choose action from epsilon greedy, start is 36, terminal 47 action = epsilon_greedy(Q, state, env.nA, epsilon) while True: #print(action) next_state, reward, done, info = env.step(action) next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) #update Q Q[state][action] = (1-alpha)*Q[state][action] + \ alpha*(reward+gamma*np.amax(Q[next_state])) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function state = env.reset() #set epsilon decay #epsilon = max(epsilon*eps_decay, eps_min) #epsilon = 1.0/i_episode epsilon = 0.005 #choose action from epsilon greedy, start is 36, terminal 47 action = epsilon_greedy(Q, state, env.nA, epsilon) while True: #print(action) next_state, reward, done, info = env.step(action) next_action = epsilon_greedy(Q, next_state, env.nA, epsilon) #update Q Qsa_next = (1-epsilon)*np.amax(Q[next_state]) + epsilon*np.mean(Q[next_state]) if not done else 0 Q[state][action] = (1-alpha)*Q[state][action] + \ alpha*(reward+gamma*Qsa_next) state = next_state action = next_action if done: break return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 1000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 1000/1000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy_act(p_q_state, p_env, p_eps): greed = np.random.choice(np.arange(2), p=[p_eps, 1-p_eps]) if greed: action = np.argmax(p_q_state) else: # the upper bound of np.random.randint is exclusive, so pass n to keep every action selectable action = np.random.randint(0, p_env.action_space.n) return action def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 20 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # plot the estimated optimal state-value function #V_sarsa = ([np.max(Q[key]) if key in Q else 0 for key in np.arange(48)]) #plot_values(V_sarsa) s0 = env.reset() a0 = eps_greedy_act(Q[s0], env, 1.0/i_episode) for i in range(1000): [s1, r, done, info] = env.step(a0) if not done: a1 = eps_greedy_act(Q[s1], env, 1.0/i_episode) Q[s0][a0] = (1 - alpha) * Q[s0][a0] + alpha * (r + gamma * Q[s1][a1]) else: Q[s0][a0] = (1 - alpha) * Q[s0][a0] + alpha * r break a0 = a1 s0 = s1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 20 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # plot the estimated optimal state-value function #V_q = ([np.max(Q[key]) if key in Q else 0 for key in np.arange(48)]) #plot_values(V_q) s0 = env.reset() for i in range(1000): a = eps_greedy_act(Q[s0], env, 1.0 / i_episode) [s1, r, done, info]=env.step(a) if not done: a_max = eps_greedy_act(Q[s1],env, 0) Q[s0][a] = (1 - alpha) * Q[s0][a] + alpha * (r + gamma * Q[s1][a_max]) else: Q[s0][a] = (1 - alpha) * Q[s0][a] + alpha * r break s0 = s1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def eps_greedy_p(p_q_state, p_env, p_eps): p= np.ones(p_env.action_space.n)*p_eps/p_env.action_space.n p[np.argmax(p_q_state)] += 1-p_eps return p def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # loop over episodes for i_episode in range(1, num_episodes + 1): # monitor progress if i_episode % 20 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # plot the estimated optimal state-value function #V_q = ([np.max(Q[key]) if key in Q else 0 for key in np.arange(48)]) #plot_values(V_q) s0 = env.reset() for i in range(1000): a = eps_greedy_act(Q[s0], env, 1.0 / i_episode) [s1, r, done, info]=env.step(a) if not done: p = eps_greedy_p(Q[s1],env, 1.0 / i_episode) Q[s0][a] = (1 - alpha) * Q[s0][a] + alpha * (r + gamma * np.sum(Q[s1]*p)) else: Q[s0][a] = (1 - alpha) * Q[s0][a] + alpha * r break s0 = s1 return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 1000, 0.05) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 1000/1000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. 
###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. 
Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code env.step(0) # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 1000, .02) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 1000/1000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
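For reference, the update computed by `update_Q_sarsamax` above is the standard Q-learning (Sarsamax) rule, whose TD target bootstraps from the greedy action value in the next state:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(R_{t+1} + \gamma \max_{a}Q(S_{t+1}, a) - Q(S_t, A_t)\big),$$

where the $\max$ term is zero when $S_{t+1}$ is terminal. Because the target always uses the greedy action while the behavior stays $\epsilon$-greedy, Q-learning is an off-policy method.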
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 1000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 1000/1000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsaexpected(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_sarsaexpected(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance 
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 1000, 0.9) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output _____no_output_____ ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function.
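One way to see why: with $\gamma = 1$ and a reward of $-1$ per move, the optimal policy climbs out of the start state, runs right along the row just above the cliff, and drops into the goal, so each state's optimal value is simply the negative of the number of moves remaining on that shortest safe path. For example,

$$v_*(36) = -13, \qquad v_*(24) = -12, \qquad v_*(35) = -1,$$

and the values plotted below follow the same pattern for every other state.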
###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code import random def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): current = Q[state][action] Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) new_value = current + (alpha * (target - current)) return new_value def epsilon_greedy(Q, state, nA, eps): if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0 state = env.reset() eps = 1.0 / i_episode action = epsilon_greedy(Q, state, nA, eps) while True: next_state, reward, done, info = env.step(action) score += reward if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state action = next_action if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0, num_episodes, len(avg_scores), endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
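As a quick reference, `update_Q_sarsa` above implements the one-step Sarsa rule, which bootstraps from the action the agent actually selects in the next state:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\big),$$

with the $\gamma Q(S_{t+1}, A_{t+1})$ term dropped on the final step of an episode (the call without a `next_state` above). Because the bootstrap uses the behavior policy's own next action, Sarsa is an on-policy method.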
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): Q_current = Q[state][action] Q_next = np.max(Q[next_state]) if next_state is not None else 0 Qsa = Q_current + alpha * (reward + gamma * Q_next - Q_current) return Qsa def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0.0 state = env.reset() eps = 1.0 / i_episode while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
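The `epsilon_greedy` helper shared by these implementations realizes the usual $\epsilon$-greedy behavior policy: with probability $1-\epsilon$ it exploits the greedy action, otherwise it picks an action uniformly at random, so each action is selected with probability

$$\pi(a \mid s) = \begin{cases} 1-\epsilon+\frac{\epsilon}{|\mathcal{A}|} & \text{if } a = \arg\max_{a'} Q(s,a'), \\ \frac{\epsilon}{|\mathcal{A}|} & \text{otherwise,} \end{cases}$$

which is exactly the distribution constructed as `policy_s` inside the Expected Sarsa update further below. The `eps = 1.0 / i_episode` schedule keeps exploration high early on and becomes greedy in the limit.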
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): Q_current = Q[state][action] policy_s = np.ones(nA) * eps / nA policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) expected_Q_next = np.dot(Q[next_state], policy_s) Q_next = Q_current + alpha * (reward + gamma * expected_Q_next - Q_current) return Q_next def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function score = 0.0 state = env.reset() eps = 0.005 while True: action = epsilon_greedy(Q, state, nA, eps) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state if done: tmp_scores.append(score) break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. 
However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ episode = [] state = env.reset() #print(state) action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() #print(get_probs(Q[state], epsilon, nA)) while True: next_state, next_reward, done, info = env.step(action) if done: #Q = update_Q(state, action, next_state, next_action, next_reward,Q, alpha, gamma,True) Q[state][action] = update_Qsa(Q[state][action],0,next_reward,alpha, gamma) break next_action = np.random.choice(np.arange(nA), p=get_probs(Q[next_state], epsilon, nA)) \ if state in Q else env.action_space.sample() Q[state][action] = update_Qsa(Q[state][action],Q[next_state][next_action],next_reward,alpha, gamma) state = next_state action = next_action return Q def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Qsa(Qsa,Q_next_sa, next_reward, alpha, gamma): """ updates the action-value function estimate using the most recent episode """ old_Q = Qsa Qsa = old_Q + alpha*(next_reward + gamma*Q_next_sa - old_Q) return Qsa def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9, eps_min=0.00005): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 50 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = epsilon*eps_decay epsilon = max(epsilon*eps_decay, eps_min) #epsilon = 1.0 / i_episode #epsilon = max(epsilon, eps_min) # generate an episode by following epsilon-greedy policy Q = generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma) # update the action-value function estimate using the episode #Q = update_Q(env, episode, Q, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
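Unlike the `1.0 / i_episode` schedules used earlier, this implementation anneals $\epsilon$ multiplicatively (`eps_start=1.0`, `eps_decay=0.9`, `eps_min=0.00005`). The standalone sketch below (independent of the training code above) traces that schedule and shows it reaches its floor after fewer than 100 episodes, so most of training runs nearly greedily with respect to `Q`:

```python
# Standalone sketch: trace the multiplicative epsilon schedule used above.
eps_start, eps_decay, eps_min = 1.0, 0.9, 0.00005

epsilon = eps_start
for i_episode in range(1, 121):
    epsilon = max(epsilon * eps_decay, eps_min)
    if i_episode in (1, 10, 50, 90, 94, 120):
        print("episode {:3d}: epsilon = {:.6f}".format(i_episode, epsilon))
# epsilon sits at eps_min from roughly episode 94 onward.
```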
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 3000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 3000/3000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ episode = [] state = env.reset() #print(state) #print(get_probs(Q[state], epsilon, nA)) while True: action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, next_reward, done, info = env.step(action) if done: #Q = update_Q(state, action, next_state, next_action, next_reward,Q, alpha, gamma,True) Q[state][action] = update_Qsa(Q[state][action],0,next_reward,alpha, gamma) break # next action for evaluating using greedy policy best_next_a = np.argmax(Q[next_state]) Q[state][action] = update_Qsa(Q[state][action],Q[next_state][best_next_a],next_reward,alpha, gamma) state = next_state return Q def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Qsa(Qsa,Q_next_sa, next_reward, alpha, gamma): """ updates the action-value function estimate using the most recent episode """ old_Q = Qsa Qsa = old_Q + alpha*(next_reward + gamma*Q_next_sa - old_Q) return Qsa def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9, eps_min=0.00005): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes epsilon = eps_start for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 50 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = epsilon*eps_decay epsilon = max(epsilon*eps_decay, eps_min) #epsilon = 1.0 / i_episode #epsilon = max(epsilon, eps_min) # generate an episode by following epsilon-greedy policy Q = generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma) # update the action-value function estimate using the episode #Q = 
update_Q(env, episode, Q, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 3000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 3000/3000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma): """ generates an episode from following the epsilon-greedy policy """ episode = [] state = env.reset() #print(state) while True: action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \ if state in Q else env.action_space.sample() next_state, next_reward, done, info = env.step(action) if done: #Q = update_Q(state, action, next_state, next_action, next_reward,Q, alpha, gamma,True) Q[state][action] = update_Qsa(Q[state][action],0,next_reward,alpha, gamma) break # Q_next_sa using expected value p=get_probs(Q[next_state], epsilon, nA) #print(p) Q_next_sa_expected = np.dot(Q[next_state],p) Q[state][action] = update_Qsa(Q[state][action],Q_next_sa_expected,next_reward,alpha, gamma) state = next_state return Q def get_probs(Q_s, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * epsilon / nA best_a = np.argmax(Q_s) policy_s[best_a] = 1 - epsilon + (epsilon / nA) return policy_s def update_Qsa(Qsa,Q_next_sa, next_reward, alpha, gamma): """ updates the action-value function estimate using the most recent episode """ old_Q = Qsa Qsa = old_Q + alpha*(next_reward + gamma*Q_next_sa - old_Q) return Qsa def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9, eps_min=0.00005): nA = env.action_space.n # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes epsilon = eps_start for i_episode in range(1, 
num_episodes+1): # monitor progress if i_episode % 50 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function # set the value of epsilon #epsilon = epsilon*eps_decay epsilon = max(epsilon*eps_decay, eps_min) #epsilon = 1.0 / i_episode #epsilon = max(epsilon, eps_min) # generate an episode by following epsilon-greedy policy Q = generate_episode_from_Q(env, Q, epsilon, nA, alpha, gamma) # update the action-value function estimate using the episode #Q = update_Q(env, episode, Q, alpha, gamma) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 5000, 0.01) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline plt.rcParamsDefault['figure.facecolor'] = 'w' import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. 
Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0] = np.arange(-14, -2) V_opt[1] = np.arange(-13, -1) V_opt[2] = np.arange(-12, 0) V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def sarsa(env, num_episodes, alpha, epsilon=0.1, gamma=1.0): assert 0 <= gamma <= 1 if not callable(alpha): alpha = (lambda alpha_val: lambda i_episode: alpha_val)(alpha) if not callable(epsilon): assert 0 <= epsilon <= 1 epsilon = (lambda epsilon_val: lambda i_episode: epsilon_val)(epsilon) nA = env.nA # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes cumulative_reward = 0 update_every = 100 for i_episode in range(1, num_episodes+1): # get alpha and epsilon for this episode alpha_val = alpha(i_episode) epsilon_val = epsilon(i_episode) # monitor progress if i_episode % update_every == 0: print("\rEpisode {}/{} (alpha={:.4f}, epsilon={:.4f}, gamma={:.4f}, temporal_average_reward={:.4f})" .format(i_episode, num_episodes, alpha_val, epsilon_val, gamma, cumulative_reward / update_every), end="") sys.stdout.flush() cumulative_reward = 0 ## TODO: complete the function state = env.reset() probs = np.full(nA, epsilon_val / nA) probs[Q[state].argmax()] += 1 - epsilon_val action = np.random.choice(nA, p=probs) while True: last_state, last_action = state, action state, reward, done, info = env.step(action) cumulative_reward += reward if done: Q[last_state][last_action] += alpha_val * (reward - Q[last_state][last_action]) break else: probs = np.full(nA, epsilon(i_episode) / nA) probs[Q[state].argmax()] += 1 - epsilon(i_episode) action = np.random.choice(nA, p=probs) Q[last_state][last_action] += alpha_val * (reward + gamma * Q[state][action] - Q[last_state][last_action]) return Q def linearly_decaying_epsilon(num_decaying_episodes, initial_epsilon=1.0, min_epsilon=0.1): decay_rate = (min_epsilon - initial_epsilon) / num_decaying_episodes def epsilon_func(i_episode): if i_episode > num_decaying_episodes: return min_epsilon return initial_epsilon + (i_episode - 1) * decay_rate return epsilon_func ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! 
Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 5000 Q_sarsa = sarsa(env=env, num_episodes=num_episodes, alpha=0.01, epsilon=linearly_decaying_epsilon(num_decaying_episodes=int(num_episodes * 0.8), initial_epsilon=0.1, min_epsilon=0.), gamma=1.0) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 (alpha=0.0100, epsilon=0.0000, gamma=1.0000, temporal_average_reward=-13.0000) ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def q_learning(env, num_episodes, alpha, epsilon=0.1, gamma=1.0): assert 0 <= gamma <= 1 if not callable(alpha): alpha = (lambda alpha_val: lambda i_episode: alpha_val)(alpha) if not callable(epsilon): assert 0 <= epsilon <= 1 epsilon = (lambda epsilon_val: lambda i_episode: epsilon_val)(epsilon) nA = env.nA # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes cumulative_reward = 0 update_every = 100 for i_episode in range(1, num_episodes+1): # get alpha and epsilon for this episode alpha_val = alpha(i_episode) epsilon_val = epsilon(i_episode) # monitor progress if i_episode % update_every == 0: print("\rEpisode {}/{} (alpha={:.4f}, epsilon={:.4f}, gamma={:.4f}, temporal_average_reward={:.4f})" .format(i_episode, num_episodes, alpha_val, epsilon_val, gamma, cumulative_reward / update_every), end="") sys.stdout.flush() cumulative_reward = 0 state = env.reset() probs = np.full(nA, epsilon_val / nA) probs[Q[state].argmax()] += 1 - epsilon_val action = np.random.choice(nA, p=probs) while True: last_state, last_action = state, action state, reward, done, info = env.step(action) cumulative_reward += reward if done: Q[last_state][last_action] += alpha_val * (reward - Q[last_state][last_action]) break else: probs = np.full(nA, epsilon(i_episode) / nA) max_action = Q[state].argmax() probs[max_action] += 1 - epsilon(i_episode) action = np.random.choice(nA, p=probs) Q[last_state][last_action] += alpha_val * (reward + gamma * Q[state][max_action] - Q[last_state][last_action]) return Q ###Output _____no_output_____ 
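###Markdown Because this implementation accepts `alpha` and `epsilon` either as constants or as callables of the episode index, other schedules can be swapped in without touching the training loop. The cell below is a hypothetical usage sketch (the call and the `Q_demo` name are illustrative only, not part of the original notebook):

```python
# Hypothetical usage sketch: supply a decaying step size as a callable.
# `q_learning` and `linearly_decaying_epsilon` are the functions defined above.
Q_demo = q_learning(env=env,
                    num_episodes=2000,
                    alpha=lambda i_episode: max(0.5 / i_episode, 0.01),
                    epsilon=linearly_decaying_epsilon(num_decaying_episodes=1500,
                                                      initial_epsilon=0.1,
                                                      min_epsilon=0.01),
                    gamma=1.0)
```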
###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 6000 Q_sarsamax = q_learning(env=env, num_episodes=num_episodes, alpha=0.01, epsilon=linearly_decaying_epsilon(num_decaying_episodes=int(num_episodes * 0.9), initial_epsilon=0.1, min_epsilon=0.01), gamma=1.0) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 6000/6000 (alpha=0.0100, epsilon=0.0100, gamma=1.0000, temporal_average_reward=-17.2300) ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def expected_sarsa(env, num_episodes, alpha, epsilon=0.1, gamma=1.0): assert 0 <= gamma <= 1 if not callable(alpha): alpha = (lambda alpha_val: lambda i_episode: alpha_val)(alpha) if not callable(epsilon): assert 0 <= epsilon <= 1 epsilon = (lambda epsilon_val: lambda i_episode: epsilon_val)(epsilon) nA = env.nA # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(nA)) # initialize performance monitor # loop over episodes cumulative_reward = 0 update_every = 100 for i_episode in range(1, num_episodes+1): # get alpha and epsilon for this episode alpha_val = alpha(i_episode) epsilon_val = epsilon(i_episode) # monitor progress if i_episode % update_every == 0: print("\rEpisode {}/{} (alpha={:.4f}, epsilon={:.4f}, gamma={:.4f}, temporal_average_reward={:.4f})" .format(i_episode, num_episodes, alpha_val, epsilon_val, gamma, cumulative_reward / update_every), end="") sys.stdout.flush() cumulative_reward = 0 state = env.reset() probs = np.full(nA, epsilon_val / nA) probs[Q[state].argmax()] += 1 - epsilon_val action = np.random.choice(nA, p=probs) while True: last_state, last_action = state, action state, reward, done, info = env.step(action) cumulative_reward += reward if done: Q[last_state][last_action] += alpha_val * (reward - Q[last_state][last_action]) break else: probs = np.full(nA, epsilon(i_episode) / nA) probs[Q[state].argmax()] += 1 - 
epsilon(i_episode) action = np.random.choice(nA, p=probs) Q[last_state][last_action] += alpha_val * (reward + gamma * np.sum(Q[state] * probs) - Q[last_state][last_action]) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function num_episodes = 5000 Q_expsarsa = expected_sarsa(env=env, num_episodes=num_episodes, alpha=0.01, epsilon=linearly_decaying_epsilon(num_decaying_episodes=int(num_episodes * 0.9), initial_epsilon=0.1, min_epsilon=0.), gamma=1.0) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 (alpha=0.0100, epsilon=0.0000, gamma=1.0000, temporal_average_reward=-13.0000) ###Markdown Temporal-Difference MethodsIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.--- Part 0: Explore CliffWalkingEnvWe begin by importing the necessary packages. ###Code import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values ###Output _____no_output_____ ###Markdown Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. 
For the cliff "states", the state-value function is not well-defined._ ###Code # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Control: SarsaIn this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) # get value of state, action pair at next time step Qsa_next = Q[next_state][next_action] if next_state is not None else 0 target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def epsilon_greedy(Q, state, nA, eps): """Selects epsilon-greedy action for supplied state. Params ====== Q (dictionary): action-value function state (int): current state nA (int): number actions in the environment eps (float): epsilon """ if random.random() > eps: # select greedy action with probability epsilon return np.argmax(Q[state]) else: # otherwise, select an action randomly return random.choice(np.arange(env.action_space.n)) def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection while True: next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score if not done: next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward, next_state, next_action) state = next_state # S <- S' action = next_action # A <- A' if done: Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \ state, action, reward) tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode 
Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 2: TD Control: Q-learningIn this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state target = reward + (gamma * Qsa_next) # construct TD target new_value = current + (alpha * (target - current)) # get updated value return new_value def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Q-Learning - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): learning rate gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 1.0 / i_episode # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action 
selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Expected SarsaIn this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. 
It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._) ###Code def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None): """Returns updated Q-value for the most recent experience.""" current = Q[state][action] # estimate in Q-table (for current state, action pair) policy_s = np.ones(nA) * eps / nA # current policy (for next state S') policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step target = reward + (gamma * Qsa_next) # construct target new_value = current + (alpha * (target - current)) # get updated value return new_value def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100): """Expected SARSA - TD Control Params ====== num_episodes (int): number of episodes to run the algorithm alpha (float): step-size parameters for the update step gamma (float): discount factor plot_every (int): number of episodes to use when calculating average score """ nA = env.action_space.n # number of actions Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays # monitor performance tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() score = 0 # initialize score state = env.reset() # start episode eps = 0.005 # set value of epsilon while True: action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection next_state, reward, done, info = env.step(action) # take action A, observe R, S' score += reward # add reward to agent's score # update Q Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \ state, action, reward, next_state) state = next_state # S <- S' if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000
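###Markdown The Q-learning and Expected Sarsa cells above build their exploration schedule with the `linearly_decaying_epsilon` helper. As a rough illustration of what a schedule with those keyword arguments does, a minimal sketch is given below; it is only an illustration, and the helper actually defined earlier in this notebook may differ in detail. ###Code # Illustrative sketch only: decays epsilon linearly, then holds it at min_epsilon.
def linearly_decaying_epsilon_sketch(num_decaying_episodes, initial_epsilon, min_epsilon):
    """Return a function of the 1-indexed episode number that decays epsilon
    linearly from initial_epsilon to min_epsilon over num_decaying_episodes
    episodes, then keeps it constant at min_epsilon."""
    def epsilon(i_episode):
        frac = min(max(i_episode - 1, 0) / num_decaying_episodes, 1.0)
        return initial_epsilon + frac * (min_epsilon - initial_epsilon)
    return epsilon

# example usage: epsilon(1) == 0.1, decaying to 0.01 after 4500 episodes
eps = linearly_decaying_epsilon_sketch(num_decaying_episodes=4500, initial_epsilon=0.1, min_epsilon=0.01)
###Output _____no_output_____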
CESM2_COSP/taylor_plots/newtaylor.ipynb
###Markdown Make new taylor plots Here I am using the original (2012) observations, but adding CAM6 Verify my methods by reproducing Figure 7 from Kay 2012 using the stored data from Ben Hillman. Function and package imports ###Code import sys # Add common resources folder to path sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis') sys.path.append('/glade/u/home/jonahshaw/Scripts/git_repos/CESM2_analysis/Common/') # sys.path.append("/home/jonahks/git_repos/netcdf_analysis/Common/") from imports import ( pd, np, xr, mpl, plt, sns, os, datetime, sys, crt, gridspec, ccrs, metrics, Iterable, cmaps, mpl,glob ) from functions import ( masked_average, add_weights, sp_map, season_mean, get_dpm, leap_year, share_ylims, to_png ) from cloud_metric import Cloud_Metric from collections import deque %matplotlib inline ###Output The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ###Markdown Taylor plot specific imports ###Code import taylor_jshaw as taylor import matplotlib as matplotlib import matplotlib.patches as patches from interp_functions import * from functions import calculate # def calculate(cntl,test): # """ # Calculate Taylor statistics for making taylor diagrams. # Works with masked array if masked with NaNs. # """ # _cntl = add_weights(cntl) # mask = np.bitwise_or(xr.ufuncs.isnan(cntl),xr.ufuncs.isnan(test)) # mask means hide # # mask = np.bitwise_or(cntl == np.nan,test == np.nan) # mask means hide # wgt = np.array(_cntl['cell_weight']) # # wgt = wgt * mask # does this work since one or zero? # wgt = np.where(~mask,wgt,np.nan) # erroring # # calculate sums and means # # These weights are not masked, so their sum is too high. # sumwgt = np.nansum(wgt) # this is probably where the error is. # meantest = np.nansum(wgt*test)/sumwgt # meancntl = np.nansum(wgt*cntl)/sumwgt # # calculate variances # stdtest = (np.nansum(wgt*(test-meantest)**2.0)/sumwgt)**0.5 # stdcntl = (np.nansum(wgt*(cntl-meancntl)**2.0)/sumwgt)**0.5 # # calculate correlation coefficient # ccnum = np.nansum(wgt*(test-meantest)*(cntl-meancntl)) # ccdem = sumwgt*stdtest*stdcntl # corr = ccnum/ccdem # # calculate variance ratio # ratio = stdtest/stdcntl # # calculate normalized bias # bias = (meantest - meancntl)/np.abs(meancntl) # # Calculate the absolute bias # bias_abs = meantest - meancntl # # calculate centered pattern RMS difference # try: # rmssum = np.nansum(wgt*((test-meantest)-(cntl-meancntl))**2.0) # except: # print('test: ',test.shape) # print('meantest: ',meantest.shape) # print('cntl: ',cntl.shape) # print('meancntl: ',meancntl.shape) # print(((test-meantest)-(cntl-meancntl)).shape) # print(((test-meantest)-(cntl-meancntl)).lat) # print(((test-meantest)-(cntl-meancntl)).lon) # rmserr = (rmssum/sumwgt)**0.5 # rmsnorm = rmserr/stdcntl # # return corr,ratio,bias,rmsnorm # return bias,corr,rmsnorm,ratio,bias_abs ###Output _____no_output_____ ###Markdown Label appropriate directories ###Code # original observations from the Kay 2012 paper og_dir = '/glade/u/home/jonahshaw/w/kay2012_OGfiles' # where to save processed files save_dir = '/glade/u/home/jonahshaw/w/archive/taylor_files/' # CAM4 and CAM5 model runs oldcase_dir = '/glade/u/home/jonahshaw/w/archive/Kay_COSP_2012/' # CAM6 model runs newcase_dir = '/glade/p/cesm/pcwg/jenkay/COSP/cesm21/' case_dirs = [oldcase_dir,oldcase_dir,newcase_dir] # cases = [ # '%s%s' % (oldcase_dir,'cam4_1deg_release_amip'), # '%s%s' % (oldcase_dir,'cam5_1deg_release_amip'), # '%s%s' % 
(newcase_dir,'f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1') # ] cases = [ 'cam4_1deg_release_amip', 'cam5_1deg_release_amip', 'f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1' ] cosp_v = [2,2,1] # cosp version (guess) # Time ranges to select by: time_range1 = ['2001-01-01', '2010-12-31'] time_range2 = ['0001-01-01', '0010-12-31'] def get_file(var_name,case,suffix=''): return glob.glob('%s/%s/*%s*.nc' % (case,suffix,var_name)) def fix_cam_time(ds): try: ds['time'] = ds['time_bnds'].isel(bnds=0) except: ds['time'] = ds['time_bnds'].isel(nbnd=0) return ds def select_AMIP(ds): if ds['time.year'][0] > 1000: # bad way to discriminate by year format _ds = ds.sel(time=slice('2001-01-01', '2010-12-31')) # work for the AMIP 0000-0010 else: _ds = ds.sel(time=slice('0001-01-01', '0010-12-31')) # work for the AMIP 0000-0010 return _ds ###Output _____no_output_____ ###Markdown Panel 1 (CERES-EBAF LWCF and SWCF) ###Code # Variables of interest _vars = ['SWCF','LWCF'] ###Output _____no_output_____ ###Markdown Open observation files ###Code og_swcf = xr.open_dataset('%s/CERES-EBAF.SWCF.nc' % (og_dir)) og_lwcf = xr.open_dataset('%s/CERES-EBAF.LWCF.nc' % (og_dir)) og_swcf = og_swcf.rename({'SWCFTOA':'SWCF'}) og_lwcf = og_lwcf.rename({'LWCFTOA':'LWCF'}) ###Output _____no_output_____ ###Markdown Open model files ###Code cntlnames = { 'SWCF': og_swcf['SWCF'], # these have to be dataarrays, not datasets 'LWCF': og_lwcf['LWCF'], } cntlnames = { 'SWCF': og_swcf['SWCF'], # these have to be dataarrays, not datasets 'LWCF': og_lwcf['LWCF'], } _vars = ['SWCF','LWCF'] suffix = 'atm/proc/tseries/month_1' model_das = {} for j in _vars: var_files = [] for i,ii in zip(case_dirs,cases): _f = glob.glob('%s/%s/%s/*%s*.nc' % (i,ii,suffix,j)) # get the correct file # open dataset _ds = xr.open_dataset(_f[0]) print(_f[0]) # apply time bounds _ds = fix_cam_time(_ds) # select the AMIP period _ds = select_AMIP(_ds) # Fix any weird month/year mismatch by weighting months equally. _da = _ds[j].groupby('time.month').mean('time').mean('month') # Interpolate to the control (observation) grid _da = _da.interp_like(cntlnames[j],method='nearest') # _da = _da.interp_like(cntlnames[j]) var_files.append(_da) # print(_f) model_das[j] = var_files ###Output /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam4_1deg_release_amip/atm/proc/tseries/month_1/cam4_1deg_release_amip.cam.h0.SWCF.200011-201012.nc /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam5_1deg_release_amip/atm/proc/tseries/month_1/cam5_1deg_release_amip.cam.h0.SWCF.200101-201012.nc /glade/p/cesm/pcwg/jenkay/COSP/cesm21//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/atm/proc/tseries/month_1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.SWCF.197901-201412.nc /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam4_1deg_release_amip/atm/proc/tseries/month_1/cam4_1deg_release_amip.cam.h0.LWCF.200011-201012.nc /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam5_1deg_release_amip/atm/proc/tseries/month_1/cam5_1deg_release_amip.cam.h0.LWCF.200101-201012.nc /glade/p/cesm/pcwg/jenkay/COSP/cesm21//f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1/atm/proc/tseries/month_1/f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1.cam.h0.LWCF.197901-201412.nc ###Markdown Set-up ###Code # Control names dictionary (i.e. 
observations) cntlnames = { 'SWCF': og_swcf['SWCF'], # these have to be dataarrays, not datasets 'LWCF': og_lwcf['LWCF'], } # Case names testnames = ('CAM4','CAM5','CAM6') testmetrics = model_das ###Output _____no_output_____ ###Markdown Calculate ###Code varnames = ['SWCF','LWCF'] nvars = 2; ntest = 3; cc = np.zeros([nvars,ntest]) ratio = np.zeros([nvars,ntest]) bias = np.zeros([nvars,ntest]) for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot # Select observational dataarray: obs_da = cntlnames[var] obs_ds = obs_da for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot # Time average: test_ds = metric #[var] # Calculate Taylor diagram relevant variables: _bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds) # print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var]) cc[ivar,itest] = _corr ratio[ivar,itest] = _ratio bias[ivar,itest] = _bias # print(bias,corr,rmsnorm,ratio) ###Output _____no_output_____ ###Markdown Plot ###Code mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 10 mpl.rcParams['text.usetex'] = True figure = plt.figure(figsize=(8,8)) figure.set_dpi(200) testcolors = ('SkyBlue','Firebrick','#f6d921') ax = figure.add_subplot(2,2,1,frameon=False) taylor_diagram = taylor.Taylor_diagram( ax,cc,ratio,bias, casecolors=testcolors, varlabels=range(1,len(varnames)+1), ) # Reference bias bubbles, wut is this? ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 circle = patches.Circle( (xloc,yloc),ref_bias/2.0, color="black", alpha=0.30, ) ax.add_patch(circle) # Reference bias bubble points - centered at the reference bubble circle = patches.Circle( (xloc,yloc),0.01, color="black", ) ax.add_patch(circle) # Reference bias text ax.text( xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc, "%.0f%s bias"%(ref_bias*100,r"\%"), color="Black", fontsize=8, horizontalalignment="left", verticalalignment="center" ) # Case labels xloc = taylor_diagram.xymax*0.95 yloc = taylor_diagram.xymax*0.05 dy = taylor_diagram.xymax*0.05 for itest,testname in enumerate(testnames[::-1]): ax.text( xloc,yloc+itest*dy, # place these just above the dots testname, color=testcolors[::-1][itest], fontsize=11, horizontalalignment="right", verticalalignment="bottom", # fontweight='bold', # doesn't do anything ) mpl.rcParams['text.usetex'] = False ###Output _____no_output_____ ###Markdown Panel 2 (ISCCP, MISR, and CALIPSO total cloud) Open files ###Code og_clt_isccp = xr.open_dataset('%s/ISCCP.CLDTOT_ISCCPCOSP.nc' % (og_dir)) og_clt_misr = xr.open_dataset('%s/MISR.CLDTOT_MISR.nc' % (og_dir)) og_clt_caliop = xr.open_dataset('%s/CALIPSO.CLDTOT_CAL.nc' % (og_dir)) og_clt_isccp = og_clt_isccp.rename({'CLDTOT_ISCCPCOSP':'CLDTOT_ISCCP'}) ###Output _____no_output_____ ###Markdown Open model files ###Code # Control names dictionary (i.e. 
observations) cntlnames = { 'CLDTOT_ISCCP': og_clt_isccp['CLDTOT_ISCCP'], # these have to be dataarrays, not datasets 'CLDTOT_MISR': og_clt_misr['CLDTOT_MISR'].where(np.abs(og_clt_misr['lat'])<60), 'CLDTOT_CAL': og_clt_caliop['CLDTOT_CAL'], } suffixes = ['atm/proc/tseries/month_1','','atm/proc/tseries/month_1'] # paths to use if I have pr file_dirs = [save_dir, save_dir, save_dir] _vars = ['CLDTOT_ISCCP','CLDTOT_MISR','CLDTOT_CAL'] case_dirs_n = [case_dirs,file_dirs,case_dirs] # _vars = ['CLDTOT_ISCCP','CLDTOT_MISR','CLDTOT_CAL'] model_das = {} for j,suf,_dir,_pdir in zip(_vars,suffixes,case_dirs,case_dirs_n): # print(j) var_files = [] for i,ii in zip(_pdir,cases): # print(ii) # break # print(('%s/%s/%s/*.%s.*' % (i,ii,suf,j))) _f = glob.glob('%s/%s/%s/*.%s.*' % (i,ii,suf,j)) # get the correct file # open dataset print(_f[0]) _ds = xr.open_dataset(_f[0]) # Fix any weird month/year mismatch by weighting months equally. # if j == 'CLDTOT_MISR': # _tvar = 'CLD_MISR' # else: # _tvar = j try: _da = _ds[j].groupby('time.month').mean('time').mean('month') except: _da = _ds['CLD_MISR'].groupby('time.month').mean('time').mean('month') # print(_f[0],_ds) # break # Interpolate to the control (observation) grid _da = _da.interp_like(cntlnames[j],method='nearest') # _da = _da.interp_like(cntlnames[j]) var_files.append(_da) model_das[j] = var_files ###Output /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam4_1deg_release_amip/atm/proc/tseries/month_1/cam4_1deg_release_amip.cam.h0.CLDTOT_ISCCP.200011-201012.nc ###Markdown Calculate These are not being masked correctly. Should just use values below 60degrees. ###Code # Case names testnames = ('CAM4','CAM5','CAM6') testmetrics = model_das varnames = ['CLDTOT_ISCCP','CLDTOT_MISR','CLDTOT_CAL'] nvars = 3; ntest = 3; cc = np.zeros([nvars,ntest]) ratio = np.zeros([nvars,ntest]) bias = np.zeros([nvars,ntest]) for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot # Select observational dataarray: obs_da = cntlnames[var] obs_ds = obs_da for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot # Time average: test_ds = metric #[var] # Calculate Taylor diagram relevant variables: _bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds) # print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var]) cc[ivar,itest] = _corr ratio[ivar,itest] = _ratio bias[ivar,itest] = _bias # print(bias,corr,rmsnorm,ratio) ###Output _____no_output_____ ###Markdown Plot ###Code mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 10 mpl.rcParams['text.usetex'] = True figure = plt.figure(figsize=(8,8)) figure.set_dpi(200) testcolors = ('SkyBlue','Firebrick','#f6d921') ax = figure.add_subplot(2,2,1,frameon=False) taylor_diagram = taylor.Taylor_diagram( ax,cc,ratio,bias, casecolors=testcolors, varlabels=range(1,len(varnames)+1), ) # Reference bias bubbles, wut is this? 
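# The reference bubble drawn below serves as a size legend: its diameter
# corresponds to a 10% normalized bias, so the per-variable bias bubbles in the
# Taylor diagram (which is given the normalized `bias` values above) can be read against it.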
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 circle = patches.Circle( (xloc,yloc),ref_bias/2.0, color="black", alpha=0.30, ) ax.add_patch(circle) # Reference bias bubble points - centered at the reference bubble circle = patches.Circle( (xloc,yloc),0.01, color="black", ) ax.add_patch(circle) # Reference bias text ax.text( xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc, "%.0f%s bias"%(ref_bias*100,r"\%"), color="Black", fontsize=8, horizontalalignment="left", verticalalignment="center" ) # Case labels xloc = taylor_diagram.xymax*0.95 yloc = taylor_diagram.xymax*0.05 dy = taylor_diagram.xymax*0.05 for itest,testname in enumerate(testnames[::-1]): ax.text( xloc,yloc+itest*dy, # place these just above the dots testname, color=testcolors[::-1][itest], fontsize=11, horizontalalignment="right", verticalalignment="bottom", # fontweight='bold', # doesn't do anything ) mpl.rcParams['text.usetex'] = False ###Output _____no_output_____ ###Markdown Panel 3 (CALIPSO low- mid- and high-level cloud) Open files ###Code og_cll_caliop = xr.open_dataset('%s/CALIPSO.CLDLOW_CAL.nc' % (og_dir)) og_clm_caliop = xr.open_dataset('%s/CALIPSO.CLDMED_CAL.nc' % (og_dir)) og_clh_caliop = xr.open_dataset('%s/CALIPSO.CLDHGH_CAL.nc' % (og_dir)) ###Output _____no_output_____ ###Markdown Open model files ###Code cntlnames = { 'CLDLOW_CAL': og_cll_caliop['CLDLOW_CAL'], # these have to be dataarrays, not datasets 'CLDMED_CAL': og_clm_caliop['CLDMED_CAL'], 'CLDHGH_CAL': og_clh_caliop['CLDHGH_CAL'], } _vars = ['CLDLOW_CAL','CLDMED_CAL','CLDHGH_CAL'] suffix = 'atm/proc/tseries/month_1' model_das = {} for j in _vars: var_files = [] for i,ii in zip(case_dirs,cases): _f = glob.glob('%s/%s/%s/*.%s.*' % (i,ii,suffix,j)) # get the correct file # open dataset print(_f[0]) _ds = xr.open_dataset(_f[0]) # apply time bounds _ds = fix_cam_time(_ds) # select the AMIP period _ds = select_AMIP(_ds) # Fix any weird month/year mismatch by weighting months equally. try: _da = _ds[j].groupby('time.month').mean('time').mean('month') # _ds.close() # ? except: print(_ds) # Interpolate to the control (observation) grid _da = _da.interp_like(cntlnames[j],method='nearest') # _da = _da.interp_like(cntlnames[j]) var_files.append(_da) # print(_f) model_das[j] = var_files ###Output /glade/u/home/jonahshaw/w/archive/Kay_COSP_2012//cam4_1deg_release_amip/atm/proc/tseries/month_1/cam4_1deg_release_amip.cam.h0.CLDLOW_CAL.200011-201012.nc ###Markdown Prep and plot ###Code # Control names dictionary (i.e. observations) cntlnames = { 'CLDLOW_CAL': og_cll_caliop['CLDLOW_CAL'], 'CLDMED_CAL': og_clm_caliop['CLDMED_CAL'], 'CLDHGH_CAL': og_clh_caliop['CLDHGH_CAL'], } # Case names testnames = ('CAM4','CAM5','CAM6') testmetrics = model_das ###Output _____no_output_____ ###Markdown Calculate These are not being masked correctly. Should just use values below 60degrees. 
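###Markdown One way to address the masking concern above is to restrict both the observed and the modeled field to latitudes equatorward of 60 degrees before computing the statistics, mirroring the `.where(np.abs(...['lat'])<60)` mask already applied to the MISR observations in Panel 2. The helper below is only a sketch of that idea; it assumes both fields carry a `lat` coordinate, which they do after the `interp_like` step above. ###Code def mask_poleward_of_60(da):
    """Keep only grid points with |latitude| < 60 degrees; everything else becomes NaN,
    which the NaN-aware weighted statistics in `calculate` then ignore."""
    return da.where(np.abs(da['lat']) < 60)

# e.g. inside the statistics loop below:
# _bias, _corr, _rmsnorm, _ratio, _bias_abs = calculate(mask_poleward_of_60(obs_ds),
#                                                       mask_poleward_of_60(test_ds))
###Output _____no_output_____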
###Code varnames = ['CLDLOW_CAL','CLDMED_CAL','CLDHGH_CAL'] nvars = 3; ntest = 3; cc = np.zeros([nvars,ntest]) ratio = np.zeros([nvars,ntest]) bias = np.zeros([nvars,ntest]) for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot # Select observational dataarray: obs_da = cntlnames[var] obs_ds = obs_da for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot # Time average: test_ds = metric #[var] # Calculate Taylor diagram relevant variables: _bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds) # print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var]) cc[ivar,itest] = _corr ratio[ivar,itest] = _ratio bias[ivar,itest] = _bias # print(bias,corr,rmsnorm,ratio) ###Output _____no_output_____ ###Markdown Plot ###Code mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 10 mpl.rcParams['text.usetex'] = True figure = plt.figure(figsize=(8,8)) figure.set_dpi(200) testcolors = ('SkyBlue','Firebrick','#f6d921') ax = figure.add_subplot(2,2,1,frameon=False) taylor_diagram = taylor.Taylor_diagram( ax,cc,ratio,bias, casecolors=testcolors, varlabels=range(1,len(varnames)+1), ) # Reference bias bubbles, wut is this? ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 circle = patches.Circle( (xloc,yloc),ref_bias/2.0, color="black", alpha=0.30, ) ax.add_patch(circle) # Reference bias bubble points - centered at the reference bubble circle = patches.Circle( (xloc,yloc),0.01, color="black", ) ax.add_patch(circle) # Reference bias text ax.text( xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc, "%.0f%s bias"%(ref_bias*100,r"\%"), color="Black", fontsize=8, horizontalalignment="left", verticalalignment="center" ) # Case labels xloc = taylor_diagram.xymax*0.95 yloc = taylor_diagram.xymax*0.05 dy = taylor_diagram.xymax*0.05 for itest,testname in enumerate(testnames[::-1]): ax.text( xloc,yloc+itest*dy, # place these just above the dots testname, color=testcolors[::-1][itest], fontsize=11, horizontalalignment="right", verticalalignment="bottom", # fontweight='bold', # doesn't do anything ) mpl.rcParams['text.usetex'] = False ###Output _____no_output_____ ###Markdown Panel 4 (MISR low-topped thick and MODIS high-topped thick cloud) Open files ###Code og_clmisr = xr.open_dataset('%s/MISR.CLDLOW_THICK_MISR.nc' % (og_dir)) og_clmodis = xr.open_dataset('%s/MODIS.CLDHGH_THICK_MODIS.nc' % (og_dir)) og_clmisr = og_clmisr.rename({'CLDLOW_THICK_MISR':'CLDTHCK_MISR'}) og_clmodis = og_clmodis.rename({'CLDHGH_THICK_MODIS':'CLDTHCK_MODIS'}) ###Output _____no_output_____ ###Markdown Set-up ###Code # Control names dictionary (i.e. 
observations) cntlnames = { 'CLDTHCK_MISR': og_clmisr['CLDTHCK_MISR'], # these have to be dataarrays, not datasets 'CLDTHCK_MODIS': og_clmodis['CLDTHCK_MODIS'], } # suffixes = ['atm/proc/tseries/month_1','','atm/proc/tseries/month_1'] # paths to use if I have pr _vars = ['CLDTHCK_MISR','CLDTHCK_MODIS'] _sdir = '/glade/u/home/jonahshaw/w/archive/taylor_files/' model_das = {} # for j,_dir in zip(_vars,case_dirs): for j in _vars: print(j) var_files = [] for ii in cases: print(ii) # print(('%s/%s/*.%s.*' % (_sdir,ii,j))) _f = glob.glob('%s/%s/*.%s.*' % (_sdir,ii,j)) # get the correct file # open dataset _ds = xr.open_dataset(_f[0]) if j == 'CLDTHCK_MISR': _tvar = 'CLD_MISR' if j == 'CLDTHCK_MODIS': _tvar = 'CLMODIS' # Fix any weird month/year mismatch by weighting months equally. _da = _ds[_tvar].groupby('time.month').mean('time').mean('month') # Interpolate to the control (observation) grid _da = _da.interp_like(cntlnames[j],method='nearest') # _da = _da.interp_like(cntlnames[j]) var_files.append(_da) model_das[j] = var_files ###Output CLDTHCK_MISR cam4_1deg_release_amip cam5_1deg_release_amip f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1 CLDTHCK_MODIS cam4_1deg_release_amip cam5_1deg_release_amip f.e21.FHIST_BGC.f09_f09_mg17.CMIP6-AMIP.001_cosp1 ###Markdown Prep and plot ###Code # Control names dictionary (i.e. observations) cntlnames = { 'CLDTHCK_MISR': og_clmisr['CLDTHCK_MISR'], # these have to be dataarrays, not datasets 'CLDTHCK_MODIS': og_clmodis['CLDTHCK_MODIS'], } # Case names testnames = ('CAM4','CAM5','CAM6') testmetrics = model_das ###Output _____no_output_____ ###Markdown Calculate These are not being masked correctly. Should just use values below 60degrees. ###Code varnames = ['CLDTHCK_MISR','CLDTHCK_MODIS'] nvars = 2; ntest = 3; cc = np.zeros([nvars,ntest]) ratio = np.zeros([nvars,ntest]) bias = np.zeros([nvars,ntest]) for ivar,var in enumerate(varnames): # iterate over the variables for a specific Taylor plot # Select observational dataarray: obs_da = cntlnames[var] obs_ds = obs_da for itest,(name,metric) in enumerate(zip(testnames,testmetrics[var])): # iterate over the models to test/plot # Time average: test_ds = metric #[var] # Calculate Taylor diagram relevant variables: _bias,_corr,_rmsnorm,_ratio,_bias_abs = calculate(obs_ds,test_ds) # print(_bias[var],_corr[var],_rmsnorm[var],_ratio[var]) cc[ivar,itest] = _corr ratio[ivar,itest] = _ratio bias[ivar,itest] = _bias # print(bias,corr,rmsnorm,ratio) ###Output _____no_output_____ ###Markdown Plot ###Code mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 10 mpl.rcParams['text.usetex'] = True figure = plt.figure(figsize=(8,8)) figure.set_dpi(200) testcolors = ('SkyBlue','Firebrick','#f6d921') ax = figure.add_subplot(2,2,1,frameon=False) taylor_diagram = taylor.Taylor_diagram( ax,cc,ratio,bias, casecolors=testcolors, varlabels=range(1,len(varnames)+1), ) # Reference bias bubbles, wut is this? 
ref_bias = 0.1 # This is a 10% bias reference bubble in the lower-left corner yloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 xloc = 0.05*taylor_diagram.xymax + ref_bias/2.0 circle = patches.Circle( (xloc,yloc),ref_bias/2.0, color="black", alpha=0.30, ) ax.add_patch(circle) # Reference bias bubble points - centered at the reference bubble circle = patches.Circle( (xloc,yloc),0.01, color="black", ) ax.add_patch(circle) # Reference bias text ax.text( xloc+ref_bias/2.0 + 0.01*taylor_diagram.xymax,yloc, "%.0f%s bias"%(ref_bias*100,r"\%"), color="Black", fontsize=8, horizontalalignment="left", verticalalignment="center" ) # Case labels xloc = taylor_diagram.xymax*0.95 yloc = taylor_diagram.xymax*0.05 dy = taylor_diagram.xymax*0.05 for itest,testname in enumerate(testnames[::-1]): ax.text( xloc,yloc+itest*dy, # place these just above the dots testname, color=testcolors[::-1][itest], fontsize=11, horizontalalignment="right", verticalalignment="bottom", # fontweight='bold', # doesn't do anything ) mpl.rcParams['text.usetex'] = False ###Output _____no_output_____ ###Markdown Old ###Code # start_dir = '/glade/u/home/jonahshaw/w/archive/taylor_files' # for i in os.listdir(start_dir): # _path = '%s/%s' % (start_dir,i) # _file = os.listdir(_path)[0] # print(_file) # # print(os.listdir('%s/%s' % (start_dir,i))) # _temp = xr.open_dataset('%s/%s' % (_path,_file)) # _temp = _temp.rename({'CLD_MISR':'CLDTOT_MISR'}) # _temp.to_netcdf('%s/%s' % (_path,_file)+'1') ###Output _____no_output_____
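###Markdown The statistics loop above is repeated for every panel with only the variable list changing. A possible refactor is sketched below; it assumes the same `calculate` signature imported from `functions` and the same `cntlnames` / `model_das` structures built for each panel, and is not part of the original analysis. ###Code def taylor_stats(cntlnames, testmetrics, testnames):
    """Compute correlation, std-dev ratio and normalized bias for every
    (variable, test case) pair, mirroring the per-panel loops above."""
    varnames = list(cntlnames.keys())
    cc = np.zeros([len(varnames), len(testnames)])
    ratio = np.zeros([len(varnames), len(testnames)])
    bias = np.zeros([len(varnames), len(testnames)])
    for ivar, var in enumerate(varnames):
        obs_da = cntlnames[var]                      # observational control field
        for itest, test_da in enumerate(testmetrics[var]):
            _bias, _corr, _rmsnorm, _ratio, _bias_abs = calculate(obs_da, test_da)
            cc[ivar, itest] = _corr
            ratio[ivar, itest] = _ratio
            bias[ivar, itest] = _bias
    return cc, ratio, bias

# e.g. cc, ratio, bias = taylor_stats(cntlnames, model_das, testnames)
###Output _____no_output_____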
1_image_classification/make_folders_and_data_downloads.ipynb
###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile path = 'drive/MyDrive/pytorch_deeplearning/pytorch_advanced/1_image_classification/' # フォルダ「data」が存在しない場合は作成する data_dir = path+"data/" if not os.path.exists(data_dir): os.mkdir(data_dir) not os.path.exists(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) from google.colab import drive drive.mount('/content/drive') # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 
本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output hello ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # 
Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) 【※(実施済み)】 ゴールデンリトリバーの画像を手動でダウンロード https://pixabay.com/ja/photos/goldenretriever-%E7%8A%AC-3724972/ の640×426サイズの画像 (画像権利情報:CC0 Creative Commons、商用利用無料、帰属表示は必要ありません) を、フォルダ「data」の直下に置く。 ###Output _____no_output_____ ###Markdown 以上 ###Code # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) 【※(実施済み)】 ゴールデンリトリバーの画像を手動でダウンロード https://pixabay.com/ja/photos/goldenretriever-%E7%8A%AC-3724972/ の640×426サイズの画像 (画像権利情報:CC0 Creative Commons、商用利用無料、帰属表示は必要ありません) を、フォルダ「data」の直下に置く。 ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 
os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" 
if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____ ###Markdown "1장 화상분류"의 준비 파일- 1장에서 사용하는 폴더를 만들고 파일을 다운로드합니다. 
###Code import os import urllib.request import zipfile # data 폴더가 존재하지 않는 경우 작성한다 data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNet의 class_index를 다운로드한다 # Keras에서 제공하는 항목 # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3절에서 사용하는 개미와 벌의 화상 데이터를 다운로드하여 압축을 해제한다 # PyTorch의 튜토리얼로 제공되는 항목 # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIP 파일을 읽는다 zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIP을 압축 해제 zip.close() # ZIP 파일을 닫는다 # ZIP 파일을 삭제 os.remove(save_path) ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) 【※(実施済み)】 ゴールデンリトリバーの画像を手動でダウンロード https://pixabay.com/ja/photos/goldenretriever-%E7%8A%AC-3724972/ の640×426サイズの画像 (画像権利情報:CC0 Creative Commons、商用利用無料、帰属表示は必要ありません) を、フォルダ「data」の直下に置く。 ###Output _____no_output_____ ###Markdown 「第1章 画像分類」の準備ファイル- 本ファイルでは、第1章で使用するフォルダの作成とファイルのダウンロードを行います。 ###Code import os import urllib.request import zipfile # フォルダ「data」が存在しない場合は作成する data_dir = "./data/" if not os.path.exists(data_dir): os.mkdir(data_dir) # ImageNetのclass_indexをダウンロードする # Kerasで用意されているものです # https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json" save_path = os.path.join(data_dir, "imagenet_class_index.json") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # 1.3節で使用するアリとハチの画像データをダウンロードし解凍します # PyTorchのチュートリアルで用意されているものです # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html url = "https://download.pytorch.org/tutorial/hymenoptera_data.zip" save_path = os.path.join(data_dir, "hymenoptera_data.zip") if not os.path.exists(save_path): urllib.request.urlretrieve(url, save_path) # ZIPファイルを読み込み zip = zipfile.ZipFile(save_path) zip.extractall(data_dir) # ZIPを解凍 zip.close() # ZIPファイルをクローズ # ZIPファイルを消去 os.remove(save_path) ###Output _____no_output_____
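###Markdown As a quick check that the preparation steps above succeeded, the contents of the data folder can be listed; the class-index JSON and the extracted hymenoptera_data folder should both be present, and if the golden retriever image was downloaded manually as described above it should appear here as well (its exact filename depends on the download). This is only an illustrative check. ###Code import os

data_dir = "./data/"
print(sorted(os.listdir(data_dir)))
# the two items downloaded by the cells above
assert os.path.exists(os.path.join(data_dir, "imagenet_class_index.json"))
assert os.path.isdir(os.path.join(data_dir, "hymenoptera_data"))
###Output _____no_output_____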
convolutional_networks/week4/Face Recognition/Face_Recognition_v3a.ipynb
###Markdown Face RecognitionIn this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). Face recognition problems commonly fall into two categories: - **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. - **Face Recognition** - "who is this person?". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person. **In this assignment, you will:**- Implement the triplet loss function- Use a pretrained model to map face images into 128-dimensional encodings- Use these encodings to perform face verification and face recognition Channels-first notation* In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **"channels first"** convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. * In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. * Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* `triplet_loss`: Additional Hints added.* `verify`: Hints added.* `who_is_it`: corrected hints given in the comments.* Spelling and formatting updates for easier reading. Load packagesLet's load the required packages. ###Code from keras.models import Sequential from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate from keras.models import Model from keras.layers.normalization import BatchNormalization from keras.layers.pooling import MaxPooling2D, AveragePooling2D from keras.layers.merge import Concatenate from keras.layers.core import Lambda, Flatten, Dense from keras.initializers import glorot_uniform from keras.engine.topology import Layer from keras import backend as K K.set_image_data_format('channels_first') import cv2 import os import numpy as np from numpy import genfromtxt import pandas as pd import tensorflow as tf from fr_utils import * from inception_blocks_v2 import * %matplotlib inline %load_ext autoreload %autoreload 2 np.set_printoptions(threshold=np.nan) ###Output Using TensorFlow backend. ###Markdown 0 - Naive Face VerificationIn Face Verification, you're given two images and you have to determine if they are of the same person. 
The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! **Figure 1** * Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. * You'll see that rather than using the raw image, you can learn an encoding, $f(img)$. * By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person. 1 - Encoding face images into a 128-dimensional vector 1.1 - Using a ConvNet to compute encodingsThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file). The key things you need to know are:- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ - It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vectorRun the cell below to create the model for face images. ###Code FRmodel = faceRecoModel(input_shape=(3, 96, 96)) print("Total Params:", FRmodel.count_params()) ###Output Total Params: 3743280 ###Markdown ** Expected Output **Total Params: 3743280 By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows: **Figure 2**: By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same personSo, an encoding is a good one if: - The encodings of two images of the same person are quite similar to each other. - The encodings of two images of different persons are very different.The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart. **Figure 3**: In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) 1.2 - The Triplet LossFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.<!--We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).!-->Training will use triplets of images $(A, P, N)$: - A is an "Anchor" image--a picture of a person. - P is a "Positive" image--a picture of the same person as the Anchor image.- N is a "Negative" image--a picture of a different person than the Anchor image.These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. 
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$You would thus like to minimize the following "triplet cost":$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$Here, we are using the notation "$[z]_+$" to denote $max(z,0)$. Notes:- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small. - The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.- $\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\alpha = 0.2$. Most implementations also rescale the encoding vectors to have an L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that in this assignment.**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$4. Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha \large ] \small_+ \tag{3}$$ Hints* Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.* For steps 1 and 2, you will sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$. * For step 4 you will sum over the training examples. Additional Hints* Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$* Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.* For steps 1 and 2, you will keep the `m` training examples and sum along the 128 values of each encoding. [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied. * Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).* In step 4, when summing over training examples, the result will be a single scalar value.* For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`. ###Code # GRADED FUNCTION: triplet_loss def triplet_loss(y_true, y_pred, alpha = 0.2): """ Implementation of the triplet loss as defined by formula (3) Arguments: y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects: anchor -- the encodings for the anchor images, of shape (None, 128) positive -- the encodings for the positive images, of shape (None, 128) negative -- the encodings for the negative images, of shape (None, 128) Returns: loss -- real number, value of the loss """ anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2] ### START CODE HERE ### (≈ 4 lines) # Step 1: Compute the (encoding) distance between the anchor and the positive pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1) # Step 2: Compute the (encoding) distance between the anchor and the negative neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1) # Step 3: subtract the two previous distances and add alpha. basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha) # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples. loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0), axis=None) ### END CODE HERE ### return loss with tf.Session() as test: tf.set_random_seed(1) y_true = (None, None, None) y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1), tf.random_normal([3, 128], mean=1, stddev=1, seed = 1), tf.random_normal([3, 128], mean=3, stddev=4, seed = 1)) loss = triplet_loss(y_true, y_pred) print("loss = " + str(loss.eval())) ###Output loss = 528.143 ###Markdown **Expected Output**: **loss** 528.143 2 - Loading the pre-trained modelFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run. ###Code FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy']) load_weights_from_FaceNet(FRmodel) ###Output _____no_output_____ ###Markdown Here are some examples of distances between the encodings between three individuals: **Figure 4**: Example of distance outputs between three individuals' encodingsLet's now use this model to perform face verification and face recognition! 3 - Applying the model You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be. 3.1 - Face VerificationLet's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image. Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face. 
###Code database = {} database["danielle"] = img_to_encoding("images/danielle.png", FRmodel) database["younes"] = img_to_encoding("images/younes.jpg", FRmodel) database["tian"] = img_to_encoding("images/tian.jpg", FRmodel) database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel) database["kian"] = img_to_encoding("images/kian.jpg", FRmodel) database["dan"] = img_to_encoding("images/dan.jpg", FRmodel) database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel) database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel) database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel) database["felix"] = img_to_encoding("images/felix.jpg", FRmodel) database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel) database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel) ###Output _____no_output_____ ###Markdown Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:1. Compute the encoding of the image from `image_path`.2. Compute the distance between this encoding and the encoding of the identity image stored in the database.3. Open the door if the distance is less than 0.7, else do not open it.* As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html). * (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) Hints* `identity` is a string that is also a key in the `database` dictionary.* `img_to_encoding` has two parameters: the `image_path` and `model`. ###Code # GRADED FUNCTION: verify def verify(image_path, identity, database, model): """ Function that verifies if the person on the "image_path" image is "identity". Arguments: image_path -- path to an image identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office. database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors). model -- your Inception model instance in Keras Returns: dist -- distance between the image_path and the image of "identity" in the database. door_open -- True, if the door should open. False otherwise. """ ### START CODE HERE ### # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line) encoding = img_to_encoding(image_path, model) # Step 2: Compute distance with identity's image (≈ 1 line) dist = np.linalg.norm(encoding - database[identity]) # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines) if dist < 0.7: print("It's " + str(identity) + ", welcome in!") door_open = True else: print("It's not " + str(identity) + ", please go away") door_open = False ### END CODE HERE ### return dist, door_open ###Output _____no_output_____ ###Markdown Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture: ###Code verify("images/camera_0.jpg", "younes", database, FRmodel) ###Output It's younes, welcome in! 
###Markdown **Expected Output**: **It's younes, welcome in!** (0.65939283, True) Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter. ###Code verify("images/camera_2.jpg", "kian", database, FRmodel) ###Output It's not kian, please go away ###Markdown **Expected Output**: **It's not kian, please go away** (0.86224014, False) 3.2 - Face RecognitionYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the office the next day he couldn't get in! To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them! You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs. **Exercise**: Implement `who_is_it()`. You will have to go through the following steps:1. Compute the target encoding of the image from image_path2. Find the encoding from the database that has the smallest distance with the target encoding. - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding. - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`. - Compute the L2 distance between the target "encoding" and the current "encoding" from the database. - If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`. ###Code # GRADED FUNCTION: who_is_it def who_is_it(image_path, database, model): """ Implements face recognition for the office by finding who is the person on the image_path image. Arguments: image_path -- path to an image database -- database containing image encodings along with the name of the person on the image model -- your Inception model instance in Keras Returns: min_dist -- the minimum distance between image_path encoding and the encodings from the database identity -- string, the name prediction for the person on image_path """ ### START CODE HERE ### ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line) encoding = img_to_encoding(image_path, model) ## Step 2: Find the closest encoding ## # Initialize "min_dist" to a large value, say 100 (≈1 line) min_dist = 100 # Loop over the database dictionary's names and encodings. for (name, db_enc) in database.items(): # Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line) dist = np.linalg.norm(encoding - db_enc) # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines) if dist < min_dist: min_dist = dist identity = name ### END CODE HERE ### if min_dist > 0.7: print("Not in the database.") else: print ("it's " + str(identity) + ", the distance is " + str(min_dist)) return min_dist, identity ###Output _____no_output_____ ###Markdown Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
###Code who_is_it("images/camera_0.jpg", database, FRmodel) ###Output it's younes, the distance is 0.659393
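###Markdown The `who_is_it()` lookup above walks through the database one entry at a time, which is perfectly fine for a handful of employees. The same nearest-neighbor search can also be written as one vectorized NumPy computation, which scales better as the database grows. The sketch below is only an illustration, not part of the graded assignment: the helper name `who_is_it_vectorized` is hypothetical, and it assumes (as in the cells above) that `img_to_encoding()` returns a NumPy array of shape (1, 128) and that the 0.7 threshold is kept. ###Code
def who_is_it_vectorized(image_path, database, model):
    """Illustrative sketch: same behaviour as who_is_it(), with a vectorized distance computation."""
    # Encode the query image (assumed shape (1, 128), as returned by img_to_encoding)
    encoding = img_to_encoding(image_path, model)
    # Stack every stored encoding into one (n_people, 128) matrix
    names = list(database.keys())
    stacked = np.vstack([database[name] for name in names])
    # Broadcasting gives the L2 distance from the query to every person at once
    dists = np.linalg.norm(stacked - encoding, axis=1)
    idx = int(np.argmin(dists))
    min_dist, identity = float(dists[idx]), names[idx]
    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))
    return min_dist, identity ###Output _____no_output_____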
01_PREWORK/week03/pra/04-robin-hood/your-solution-here/030322_TAREA_Programa_RobinHood.ipynb
###Markdown 1. Robin Hood is famous for hitting one arrow with another arrow. Has he managed to do it? ###Code # puntos is the list of (x, y) arrow coordinates, assumed to be defined in an earlier cell repetido = [] unico = [] for i in puntos: if i not in unico: unico.append(i) else: if i not in repetido: repetido.append(i) print(f"The repeated values are {repetido}") ###Output The repeated values are [(4, 5), (-3, 2), (5, 7), (2, 2)] ###Markdown 2. Calculate how many arrows have landed in each quadrant. ###Code cuadrante_1 = [] # Both values positive cuadrante_2 = [] # First value negative, second positive cuadrante_3 = [] # Both values negative cuadrante_4 = [] # First value positive, second negative for x,y in puntos: if x>=0 and y>0: cuadrante_1.append(1) elif x>0 and y<=0: cuadrante_4.append(1) elif x<=0 and y<0: cuadrante_3.append(1) elif x<0 and y>=0: cuadrante_2.append(1) print(f"The total for the first quadrant is {sum(cuadrante_1)}") print(f"The total for the second quadrant is {sum(cuadrante_2)}") print(f"The total for the third quadrant is {sum(cuadrante_3)}") print(f"The total for the fourth quadrant is {sum(cuadrante_4)}") ###Output The total for the first quadrant is 11 The total for the second quadrant is 6 The total for the third quadrant is 3 The total for the fourth quadrant is 2
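###Markdown The quadrant tally above keeps four separate lists and sums them at the end. An equivalent, slightly more compact approach is to classify each point once and let `collections.Counter` do the counting. This is only an illustrative sketch: the helper name `quadrant_of` is made up for this example, it reuses the same `puntos` list of (x, y) tuples assumed to be defined earlier in the notebook, and it applies the same boundary rules for points that fall on an axis. ###Code
from collections import Counter

def quadrant_of(x, y):
    # Hypothetical helper mirroring the boundary rules used above
    if x >= 0 and y > 0:
        return 1
    elif x > 0 and y <= 0:
        return 4
    elif x <= 0 and y < 0:
        return 3
    elif x < 0 and y >= 0:
        return 2
    return 0  # the origin (0, 0) falls in no quadrant under these rules

counts = Counter(quadrant_of(x, y) for x, y in puntos)
for q in (1, 2, 3, 4):
    print(f"Arrows in quadrant {q}: {counts[q]}") ###Output _____no_output_____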
_labs/Lab05/Lab05-MergingJoin.ipynb
###Markdown Lab 05: Merging and Joining Data (10 Bonus Points!)This lab is presented with some revisions from [Dennis Sun at Cal Poly](https://web.calpoly.edu/~dsun09/index.html) and his [Data301 Course](http://users.csc.calpoly.edu/~dsun09/data301/lectures.html) When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/) In many situtions, the information you need is spread across multiple data sets, so you will need to combine multiple data sets into one. In this chapter, we explore how to combine information from multiple (tabular) data sets.As a working example, we will use the baby names data collected by the Social Security Administration. Each data set in this collection contains the names of all babies born in the United States in a particular year. This data is [publicly available](https://www.ssa.gov/OACT/babynames/limits.html), and a copy has been made available at `../data/names.zip`.**Note:** You will need to unzip this data into the directory where you are working to complete this lab!**Note:** If you are on a windows machine the command below won't work quite right, don't worry about it! ###Code import os os.listdir(os.path.join("..","data","names")) ###Output _____no_output_____ ###Markdown As you can see this data is broken up into a lot of individual files, but if we want to use any of our `groupby` and other analysis techniques we need to make it into one file! ConcatenationSometimes, the _rows_ of data are spread across multiple files, and we want to combine the rows into a single data set. The process of combining rows from different data sets is known as **concatenation**. Visually, to concatenate two or more `DataFrame`s means to stack them on top of one another.For example, suppose we want to understand how the popularity of different names evolved between 1995 and 2015. The 1995 names and the 2015 names are stored in two different files: `yob1995.txt` and `yob2015.txt`, respectively. To carry out this analysis, we will need to combine these two data sets into one. ###Code %matplotlib inline import pandas as pd # These two things are for Pandas, #it widens the notebook and lets us display data easily. from IPython.core.display import display, HTML display(HTML("<style>.container { width:95% !important; }</style>")) # Show a ludicrus number of rows and columns pd.options.display.max_rows = 500 pd.options.display.max_columns = 500 pd.options.display.width = 1000 names1995 = pd.read_csv("./data/names/yob1995.txt", header=None, names=["Name", "Sex", "Count"]) names1995.head() names2015 = pd.read_csv("./data/names/yob2015.txt", header=None, names=["Name", "Sex", "Count"]) names2015.head() ###Output _____no_output_____ ###Markdown To concatenate the two, we use the `pd.concat()` function, which accepts a _list_ of `pandas` objects (`DataFrames` or `Series`) and concatenates them. ###Code pd.concat([names1995, names2015]) ###Output _____no_output_____ ###Markdown There are two problems with the combined data set above. First, there is no longer any way to distinguish the 1995 data from the 2015 data. To fix this, we can add a "Year" column to each `DataFrame` before we concatenate. Second, the indexes from the individual `DataFrame`s have been preserved. (To see this, observe that the last index in the `DataFrame` is 32,951, which corresponds to the number of rows in `names2015`, but there are actually 59,032 rows in the `DataFrame`.) That means that there are two rows with an index of 0, two rows with an index of 1, and so on. 
To force `pandas` to create a completely new index for this `DataFrame`, ignoring the indices from the individual `DataFrame`s, we specify `ignore_index=True`. ###Code names1995["Year"] = 1995 names2015["Year"] = 2015 names = pd.concat([names1995, names2015], ignore_index=True) names ###Output _____no_output_____ ###Markdown Now this is a `DataFrame` that we can use!Notice that the data is currently in tabular form, with one row per combination of name, sex, and year. It makes sense to set these to be the index of our `DataFrame`. ###Code names.set_index(["Name", "Sex", "Year"], inplace=True) names.head() ###Output _____no_output_____ ###Markdown We may want to show the counts for the two years side by side. In other words, we want a data cube with (name, sex) along one axis and year along the other. To do this, we can `.unstack()` the year from the index. Note this is similar to a reverse Melt operation that we talked about in class -- a more tidy data way to do this may be to setup year as a multi index. ###Code names.unstack("Year").head() ###Output _____no_output_____ ###Markdown The `NaN`s simply indicate that there were no children (more precisely, if you read [the documentation](https://www.ssa.gov/OACT/babynames/limits.html), fewer than five children) born in the United States in that year. In this case, it makes sense to fill these `NaN` values with 0. ###Code names.unstack().fillna(0).head() ###Output _____no_output_____ ###Markdown Merging (a.k.a. Joining)More commonly, the data sets that we want to combine actually contain different information about the same observations. In other words, instead of stacking the `DataFrame`s on top of each other, as in concatenation, we want to stack them next to each other. The process of combining columns or variables from different data sets is known as **merging** or **joining**.The observations in the two data sets may not be in the same order, so merging is not as simple as stacking the `DataFrame`s side by side. For example, the process might look as follows:![](../images/one-to-one.png)In _pandas_, merging is accomplished using the `.merge()` function. We have to specify the variable(s) that we want to match across the two data sets. For example, to merge the 1995 names with the 2015 names, we have to join on name and sex. ###Code names1995.merge(names2015, on=["Name", "Sex"]).head() ###Output _____no_output_____ ###Markdown The variables `Name` and `Sex` that we joined on each appear once in the resulting `DataFrame`. The variable `Count`, which we did not join on, appears twice---since there are columns called `Count` in both `DataFrame`s. Notice that `pandas` automatically appended the suffix `_x` to the name of the variable from the left data set and `_y` to the name from the right. We can customize the suffixes by specifying the `suffixes=` argument. ###Code names1995.merge(names2015, on=["Name", "Sex"], suffixes=("1995", "2015")).head() ###Output _____no_output_____ ###Markdown In the code above, we assumed that the columns that we joined on had the same names in the two data sets. What if they had different names? For example, suppose the columns had been lowercase in one and uppercase in the other. We can specify which variables to use from the left and right data sets using the `left_on=` and `right_on=` arguments. 
###Code # Create new DataFrames where the column names are different names1995_lower = names1995.copy() names2015_upper = names2015.copy() names1995_lower.columns = names1995.columns.str.lower() names2015_upper.columns = names2015.columns.str.upper() # This is how you merge them. names1995_lower.merge( names2015_upper, left_on=("name", "sex"), right_on=("NAME", "SEX") ).head() ###Output _____no_output_____ ###Markdown Note that here we've managed to get some redundant columns so we would need to drop these to keep our data tidy! What if the "variables" that we want to join on are in the index? We can always call `.reset_index()` to make them columns, but we can also specify the arguments `left_index=True` or `right_index=True` to force `pandas` to use the index instead of columns. Note that if we were to use the Pandas `join` command the default action would be to join on the indicies. ###Code names1995_idx = names1995.set_index(["Name", "Sex"]) names1995_idx.head() names1995_idx.merge(names2015, left_index=True, right_on=("Name", "Sex")).head() ###Output _____no_output_____ ###Markdown Note that this worked because the left `DataFrame` had an index with two levels, which were joined to two columns from the right `DataFrame`. One-to-One and Many-to-One RelationshipsIn the example above, there was at most one (name, sex) combination in the 2015 data set for each (name, sex) combination in the 1995 data set. These two data sets are thus said to have a **one-to-one relationship**. Another example of a one-to-one data set is the Beatles example from above. Each Beatle appears in each data set exactly once, so the name is uniquely identifying.![](../images/one-to-one.png)However, two data sets need not have a one-to-one relationship. For example, a data set that specifies the instrument(s) that each Beatle played would potentially feature each Beatle multiple times (if they played multiple instruments). If we joined this data set to the "Beatles career" data set, then each row in the "Beatles career" data set would be mapped to several rows in the "instruments" data set. These two data sets are said to have a **many-to-one relationship**.![](../images/many-to-one.png) Many-to-Many Relationships: A Cautionary TaleIn the baby names data, the name is not uniquely identifying. For example, there are both males and females with the name "Jessie". ###Code jessie1995 = names1995[names1995["Name"] == "Jessie"] jessie2015 = names2015[names2015["Name"] == "Jessie"] jessie1995 ###Output _____no_output_____ ###Markdown That is why we have to be sure to join on both name and sex. But what would go wrong if we joined these two `DataFrame`s on just "Name"? Let's try it out: ###Code jessie1995.merge(jessie2015, on=["Name"]) ###Output _____no_output_____ ###Markdown We see that Jessie ends up appearing four times.- Female Jessies from 1995 are matched with female Jessies from 2015. (Good!)- Male Jessies from 1995 are matched with male Jessies from 2015. (Good!)- Female Jessies from 1995 are matched with male Jessies from 2015. (Huh?)- Male Jessies from 1995 are matched with female Jessies from 2015. (Huh?)The problem is that there were multiple Jessies in the 1995 data and multiple Jessies in the 2015 data. We say that these two data sets have a **many-to-many relationship**. Joining DataIn the previous section, we discussed how to _merge_ (or _join_) two data sets by matching on certain variables. But what happens when no match can be found for a row in one `DataFrame`? 
First, let's determine how _pandas_ handles this situation by default. The name "Nevaeh", which is "Heaven" spelled backwards, is said to have taken off when Sonny Sandoval of the band P.O.D. gave his daughter the name in 2000. Let's look at how common this name was four years earlier and four years after. ###Code names1996 = pd.read_csv("./data/names/yob1996.txt", header=None, names=["Name", "Sex", "Count"]) names2004 = pd.read_csv("./data/names/yob2004.txt", header=None, names=["Name", "Sex", "Count"]) names1996[names1996.Name == "Nevaeh"] names2004[names2004.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown In 1996, there were no girls (or fewer than 5) named Nevaeh; just eight years later, there were over 3000 girls (and 27 boys) with the name. It seems like Sonny Sandoval had a huge effect.What will happen to the name "Nevaeh" when we merge the two data sets? ###Code names = names1996.merge(names2004, on=["Name", "Sex"], suffixes=("1996", "2004")) names[names.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown By default, _pandas_ only includes combinations that are present in _both_ `DataFrame`s. If it cannot find a match for a row in one `DataFrame`, then the combination is simply dropped. But in this context, the fact that a name does not appear in one data set is informative. It means that no babies were born in that year with that name. (Technically, it means that fewer than 5 babies were born with that name, as any name that was assigned fewer than 5 times is omitted for privacy reasons.) We might want to include names that appeared in only one of the two `DataFrame`s, rather than just the names that appeared in both. There are four types of joins, distinguished by whether they include names from the left `DataFrame`, the right `DataFrame`, both, or neither:1. **inner join** (default): only values that are present in _both_ `DataFrame`s are included in the result2. **outer join**: any value that appears in _either_ `DataFrame` is included in the result3. **left join**: any value that appears in the _left_ `DataFrame` is included in the result, whether or not it appears in the right `DataFrame`4. **right join**: any value that appears in the _right_ `DataFrame` is included in the result, whether or not it appears in the left `DataFrame`.In _pandas_, the join type is specified using the `how=` argument.Now let's look at examples of each of these types of joins. ###Code # inner join names_inner = names1996.merge(names2004, on=["Name", "Sex"], how="inner", suffixes=("1996", "2004")) names_inner.head() # outer join names_outer = names1996.merge(names2004, on=["Name", "Sex"], how="outer", suffixes=("1996", "2004")) names_outer.head() ###Output _____no_output_____ ###Markdown Names like "Zyrell" and "Zyron" appeared in the 2004 data but not the 1996 data. For this reason, their count in 1996 is `NaN`. In general, there will be `NaN`s in a `DataFrame` resulting from an outer join. Any time a name appears in one `DataFrame` but not the other, there will be `NaN`s in the columns from the `DataFrame` whose data is missing. ###Code names_outer.isnull().sum() ###Output _____no_output_____ ###Markdown By contrast, there are no `NaN`s when we do an inner join. That is because we restrict to only the (name, sex) pairs that appeared in both `DataFrame`s, so we have counts for both 1996 and 2014. ###Code names_inner.isnull().sum() ###Output _____no_output_____ ###Markdown Left and right joins preserve data from one `DataFrame` but not the other. 
For example, if we were trying to calculate the percentage change for each name from 1996 to 2004, we would want to include all of the names that appeared in the 1996 data. If the name did not appear in the 2004 data, then that is informative. ###Code # left join names_left = names1996.merge(names2004, on=["Name", "Sex"], how="left", suffixes=("1996", "2004")) names_left.head() ###Output _____no_output_____ ###Markdown The result of a left join has `NaN`s in the column from the right `DataFrame`. ###Code names_left.isnull().sum() ###Output _____no_output_____ ###Markdown The result of a right join, on the other hand, has `NaN`s in the column from the left `DataFrame`. ###Code # right join names_right = names1996.merge(names2004, on=["Name", "Sex"], how="right", suffixes=("1996", "2004")) names_right.head() names_right.isnull().sum() ###Output _____no_output_____ ###Markdown One way to visualize the different types of joins is using Venn diagrams. The shaded circles specify which values are included in the output.![](../images/joins.jpeg) Exercises **Exercise 1.** Make a line plot showing the popularity of your name over the years. Make sure you include all the years in the dataset! You'll need to write some code to make sure you open **all** the year datafiles.(**BONUS Extra Credit (2 points)**: As an added challenge, try marking the year you were born with a graphic element.)(If you have a rare name that does not appear in the data set, choose a friend's name.) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown Exercises 2-4 deal with the [Movielens data 1M Dataset](https://grouplens.org/datasets/movielens/1m/) which has been copied into the Github for this class. This dataset is a collection of movie ratings submitted by users. The information about the movies, ratings, and users are stored in three separate files, called `movies.dat`, `ratings.dat`, and `users.dat`. The column names are not included with the data files. Refer to the data documentation (`./data/movielens/README`) for the column names and how the columns correspond across the data sets.For the first part of this excersize you need to open these datafiles, make sure the column headders are correct, and merge them into a single DataFrame to answer the questions. Take note of the seperators in the data and maybe look at the documentation for [Pandas read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) for some hits. **Exercise 2.** Who's more generous with ratings: males or females? Calculate the average of the ratings given by male users, and the average of the ratings given by female users. ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 3.** Calculate the number of ratings for each of the movies. How many of the movies had zero ratings?(_Hint_: You may need to use operations on the ratings table first.)(_Hint_: Why is an inner join not sufficient here?) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 4.** How many movies received both a 1 and a 5 rating? Do this by creating and joining two appropriate tables.(*Hint:* The [Pandas unique()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.unique.html) function may be nice here...) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 5.** Among movies with at least 100 ratings, which movie had the highest average rating? (**Hint:** Try filtering the dataframe before using other commands.) 
###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **BONUS BONUS 8 POINTS.** For each movie, calculate the average age of the users who rated it and the average rating. Make a scatterplot showing the relationship between age and rating, with each point representing a movie. Use the size of each point to represent the number of users who rated the movie. ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown Lab 05: Merging and Joining Data (15 Possible Bonus Points!)This lab is presented with some revisions from [Dennis Sun at Cal Poly](https://web.calpoly.edu/~dsun09/index.html) and his [Data301 Course](http://users.csc.calpoly.edu/~dsun09/data301/lectures.html) When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/) In many situtions, the information you need is spread across multiple data sets, so you will need to combine multiple data sets into one. In this chapter, we explore how to combine information from multiple (tabular) data sets.As a working example, we will use the baby names data collected by the Social Security Administration. Each data set in this collection contains the names of all babies born in the United States in a particular year. This data is [publicly available](https://www.ssa.gov/OACT/babynames/limits.html), and a copy has been made available at `../data/names.zip`.**Note:** You will need to unzip this data into the directory where you are working to complete this lab!**Note:** If you are on a windows machine the command below won't work quite right, don't worry about it! ###Code import os os.listdir(os.path.join("..","data","names")) ###Output _____no_output_____ ###Markdown As you can see this data is broken up into a lot of individual files, but if we want to use any of our `groupby` and other analysis techniques we need to make it into one file! ConcatenationSometimes, the _rows_ of data are spread across multiple files, and we want to combine the rows into a single data set. The process of combining rows from different data sets is known as **concatenation**. Visually, to concatenate two or more `DataFrame`s means to stack them on top of one another.For example, suppose we want to understand how the popularity of different names evolved between 1995 and 2015. The 1995 names and the 2015 names are stored in two different files: `yob1995.txt` and `yob2015.txt`, respectively. To carry out this analysis, we will need to combine these two data sets into one. ###Code %matplotlib inline import pandas as pd # These two things are for Pandas, #it widens the notebook and lets us display data easily. from IPython.core.display import display, HTML display(HTML("<style>.container { width:95% !important; }</style>")) # Show a ludicrus number of rows and columns pd.options.display.max_rows = 500 pd.options.display.max_columns = 500 pd.options.display.width = 1000 names1995 = pd.read_csv("../data/names/yob1995.txt", header=None, names=["Name", "Sex", "Count"]) names1995.head() names2015 = pd.read_csv("../data/names/yob2015.txt", header=None, names=["Name", "Sex", "Count"]) names2015.head() ###Output _____no_output_____ ###Markdown To concatenate the two, we use the `pd.concat()` function, which accepts a _list_ of `pandas` objects (`DataFrames` or `Series`) and concatenates them. ###Code pd.concat([names1995, names2015]) ###Output _____no_output_____ ###Markdown There are two problems with the combined data set above. 
First, there is no longer any way to distinguish the 1995 data from the 2015 data. To fix this, we can add a "Year" column to each `DataFrame` before we concatenate. Second, the indexes from the individual `DataFrame`s have been preserved. (To see this, observe that the last index in the `DataFrame` is 32,951, which corresponds to the number of rows in `names2015`, but there are actually 59,032 rows in the `DataFrame`.) That means that there are two rows with an index of 0, two rows with an index of 1, and so on. To force `pandas` to create a completely new index for this `DataFrame`, ignoring the indices from the individual `DataFrame`s, we specify `ignore_index=True`. ###Code names1995["Year"] = 1995 names2015["Year"] = 2015 names = pd.concat([names1995, names2015], ignore_index=True) names ###Output _____no_output_____ ###Markdown Now this is a `DataFrame` that we can use!Notice that the data is currently in tabular form, with one row per combination of name, sex, and year. It makes sense to set these to be the index of our `DataFrame`. ###Code names.set_index(["Name", "Sex", "Year"], inplace=True) names.head() ###Output _____no_output_____ ###Markdown We may want to show the counts for the two years side by side. In other words, we want a data cube with (name, sex) along one axis and year along the other. To do this, we can `.unstack()` the year from the index. Note this is similar to a reverse Melt operation that we talked about in class -- a more tidy data way to do this may be to setup year as a multi index. ###Code names.unstack("Year").head() ###Output _____no_output_____ ###Markdown The `NaN`s simply indicate that there were no children (more precisely, if you read [the documentation](https://www.ssa.gov/OACT/babynames/limits.html), fewer than five children) born in the United States in that year. In this case, it makes sense to fill these `NaN` values with 0. ###Code names.unstack().fillna(0).head() ###Output _____no_output_____ ###Markdown Merging (a.k.a. Joining)More commonly, the data sets that we want to combine actually contain different information about the same observations. In other words, instead of stacking the `DataFrame`s on top of each other, as in concatenation, we want to stack them next to each other. The process of combining columns or variables from different data sets is known as **merging** or **joining**.The observations in the two data sets may not be in the same order, so merging is not as simple as stacking the `DataFrame`s side by side. For example, the process might look as follows:![](../images/one-to-one.png)In _pandas_, merging is accomplished using the `.merge()` function. We have to specify the variable(s) that we want to match across the two data sets. For example, to merge the 1995 names with the 2015 names, we have to join on name and sex. ###Code names1995.merge(names2015, on=["Name", "Sex"]).head() ###Output _____no_output_____ ###Markdown The variables `Name` and `Sex` that we joined on each appear once in the resulting `DataFrame`. The variable `Count`, which we did not join on, appears twice---since there are columns called `Count` in both `DataFrame`s. Notice that `pandas` automatically appended the suffix `_x` to the name of the variable from the left data set and `_y` to the name from the right. We can customize the suffixes by specifying the `suffixes=` argument. 
###Code names1995.merge(names2015, on=["Name", "Sex"], suffixes=("1995", "2015")).head() ###Output _____no_output_____ ###Markdown In the code above, we assumed that the columns that we joined on had the same names in the two data sets. What if they had different names? For example, suppose the columns had been lowercase in one and uppercase in the other. We can specify which variables to use from the left and right data sets using the `left_on=` and `right_on=` arguments. ###Code # Create new DataFrames where the column names are different names1995_lower = names1995.copy() names2015_upper = names2015.copy() names1995_lower.columns = names1995.columns.str.lower() names2015_upper.columns = names2015.columns.str.upper() # This is how you merge them. names1995_lower.merge( names2015_upper, left_on=("name", "sex"), right_on=("NAME", "SEX") ).head() ###Output _____no_output_____ ###Markdown Note that here we've managed to get some redundant columns so we would need to drop these to keep our data tidy! What if the "variables" that we want to join on are in the index? We can always call `.reset_index()` to make them columns, but we can also specify the arguments `left_index=True` or `right_index=True` to force `pandas` to use the index instead of columns. Note that if we were to use the Pandas `join` command the default action would be to join on the indicies. ###Code names1995_idx = names1995.set_index(["Name", "Sex"]) names1995_idx.head() names1995_idx.merge(names2015, left_index=True, right_on=("Name", "Sex")).head() ###Output _____no_output_____ ###Markdown Note that this worked because the left `DataFrame` had an index with two levels, which were joined to two columns from the right `DataFrame`. One-to-One and Many-to-One RelationshipsIn the example above, there was at most one (name, sex) combination in the 2015 data set for each (name, sex) combination in the 1995 data set. These two data sets are thus said to have a **one-to-one relationship**. Another example of a one-to-one data set is the Beatles example from above. Each Beatle appears in each data set exactly once, so the name is uniquely identifying.![](../images/one-to-one.png)However, two data sets need not have a one-to-one relationship. For example, a data set that specifies the instrument(s) that each Beatle played would potentially feature each Beatle multiple times (if they played multiple instruments). If we joined this data set to the "Beatles career" data set, then each row in the "Beatles career" data set would be mapped to several rows in the "instruments" data set. These two data sets are said to have a **many-to-one relationship**.![](../images/many-to-one.png) Many-to-Many Relationships: A Cautionary TaleIn the baby names data, the name is not uniquely identifying. For example, there are both males and females with the name "Jessie". ###Code jessie1995 = names1995[names1995["Name"] == "Jessie"] jessie2015 = names2015[names2015["Name"] == "Jessie"] jessie1995 ###Output _____no_output_____ ###Markdown That is why we have to be sure to join on both name and sex. But what would go wrong if we joined these two `DataFrame`s on just "Name"? Let's try it out: ###Code jessie1995.merge(jessie2015, on=["Name"]) ###Output _____no_output_____ ###Markdown We see that Jessie ends up appearing four times.- Female Jessies from 1995 are matched with female Jessies from 2015. (Good!)- Male Jessies from 1995 are matched with male Jessies from 2015. (Good!)- Female Jessies from 1995 are matched with male Jessies from 2015. 
(Huh?)- Male Jessies from 1995 are matched with female Jessies from 2015. (Huh?)The problem is that there were multiple Jessies in the 1995 data and multiple Jessies in the 2015 data. We say that these two data sets have a **many-to-many relationship**. Joining DataIn the previous section, we discussed how to _merge_ (or _join_) two data sets by matching on certain variables. But what happens when no match can be found for a row in one `DataFrame`? First, let's determine how _pandas_ handles this situation by default. The name "Nevaeh", which is "Heaven" spelled backwards, is said to have taken off when Sonny Sandoval of the band P.O.D. gave his daughter the name in 2000. Let's look at how common this name was four years earlier and four years after. ###Code names1996 = pd.read_csv("../data/names/yob1996.txt", header=None, names=["Name", "Sex", "Count"]) names2004 = pd.read_csv("../data/names/yob2004.txt", header=None, names=["Name", "Sex", "Count"]) names1996[names1996.Name == "Nevaeh"] names2004[names2004.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown In 1996, there were no girls (or fewer than 5) named Nevaeh; just eight years later, there were over 3000 girls (and 27 boys) with the name. It seems like Sonny Sandoval had a huge effect.What will happen to the name "Nevaeh" when we merge the two data sets? ###Code names = names1996.merge(names2004, on=["Name", "Sex"], suffixes=("1996", "2004")) names[names.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown By default, _pandas_ only includes combinations that are present in _both_ `DataFrame`s. If it cannot find a match for a row in one `DataFrame`, then the combination is simply dropped. But in this context, the fact that a name does not appear in one data set is informative. It means that no babies were born in that year with that name. (Technically, it means that fewer than 5 babies were born with that name, as any name that was assigned fewer than 5 times is omitted for privacy reasons.) We might want to include names that appeared in only one of the two `DataFrame`s, rather than just the names that appeared in both. There are four types of joins, distinguished by whether they include names from the left `DataFrame`, the right `DataFrame`, both, or neither:1. **inner join** (default): only values that are present in _both_ `DataFrame`s are included in the result2. **outer join**: any value that appears in _either_ `DataFrame` is included in the result3. **left join**: any value that appears in the _left_ `DataFrame` is included in the result, whether or not it appears in the right `DataFrame`4. **right join**: any value that appears in the _right_ `DataFrame` is included in the result, whether or not it appears in the left `DataFrame`.In _pandas_, the join type is specified using the `how=` argument.Now let's look at examples of each of these types of joins. ###Code # inner join names_inner = names1996.merge(names2004, on=["Name", "Sex"], how="inner", suffixes=("1996", "2004")) names_inner.head() # outer join names_outer = names1996.merge(names2004, on=["Name", "Sex"], how="outer", suffixes=("1996", "2004")) names_outer.head() ###Output _____no_output_____ ###Markdown Names like "Zyrell" and "Zyron" appeared in the 2004 data but not the 1996 data. For this reason, their count in 1996 is `NaN`. In general, there will be `NaN`s in a `DataFrame` resulting from an outer join. Any time a name appears in one `DataFrame` but not the other, there will be `NaN`s in the columns from the `DataFrame` whose data is missing. 
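###Markdown Since a name that is missing from one year's file effectively means a count of zero, one option (not required for this lab) is to replace those `NaN`s before doing any arithmetic on the outer-join result, just as we did after `.unstack()` earlier. A minimal sketch, assuming the `names_outer` frame built above with its `Count1996` and `Count2004` columns: ###Code
# Treat "name absent in that year" as a count of 0 (strictly, fewer than 5 births)
names_filled = names_outer.fillna({"Count1996": 0, "Count2004": 0})
names_filled.head() ###Output _____no_output_____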
###Code names_outer.isnull().sum() ###Output _____no_output_____ ###Markdown By contrast, there are no `NaN`s when we do an inner join. That is because we restrict to only the (name, sex) pairs that appeared in both `DataFrame`s, so we have counts for both 1996 and 2014. ###Code names_inner.isnull().sum() ###Output _____no_output_____ ###Markdown Left and right joins preserve data from one `DataFrame` but not the other. For example, if we were trying to calculate the percentage change for each name from 1996 to 2004, we would want to include all of the names that appeared in the 1996 data. If the name did not appear in the 2004 data, then that is informative. ###Code # left join names_left = names1996.merge(names2004, on=["Name", "Sex"], how="left", suffixes=("1996", "2004")) names_left.head() ###Output _____no_output_____ ###Markdown The result of a left join has `NaN`s in the column from the right `DataFrame`. ###Code names_left.isnull().sum() ###Output _____no_output_____ ###Markdown The result of a right join, on the other hand, has `NaN`s in the column from the left `DataFrame`. ###Code # right join names_right = names1996.merge(names2004, on=["Name", "Sex"], how="right", suffixes=("1996", "2004")) names_right.head() names_right.isnull().sum() ###Output _____no_output_____ ###Markdown One way to visualize the different types of joins is using Venn diagrams. The shaded circles specify which values are included in the output.![](../images/joins.jpeg) Exercises **Exercise 1.** Make a line plot showing the popularity of your name over the years. Make sure you include all the years in the dataset! You'll need to write some code to make sure you open **all** the year datafiles.(**BONUS Extra Credit (3 points)**: As an added challenge, try marking the year you were born with a graphic element.)(**BONUS Extra Credit (2 points)**: As an additional added challenge, also plot 3 friends or relatives along with their birthdays and sequences on the same graph.)(If you have a rare name that does not appear in the data set, choose a friend's name.) 
###Code # Correctly assign file names all_csvs = [] for file in os.listdir(os.path.join("..","data","names"))[1:]: all_csvs.append("../data/names/%s" % file) # Get a DataFrame of men named Samuel and Michael and women named Lily and Carly every year samuel_dfs = [] lily_dfs = [] michael_dfs = [] carly_dfs = [] year = 1880 for file in all_csvs: temp_df = pd.read_csv(file, header=None, names=["Name", "Sex", "Count"]) temp_df["Year"] = year temp_df.set_index("Year", inplace=True) samuel_df = temp_df[(temp_df["Name"] == "Samuel") & (temp_df["Sex"] == "M")] samuel_dfs.append(samuel_df) lily_df = temp_df[(temp_df["Name"] == "Lily") & (temp_df["Sex"] == "F")] lily_dfs.append(lily_df) michael_df = temp_df[(temp_df["Name"] == "Michael") & (temp_df["Sex"] == "M")] michael_dfs.append(michael_df) carly_df = temp_df[(temp_df["Name"] == "Carly") & (temp_df["Sex"] == "F")] carly_dfs.append(carly_df) year += 1 all_samuels = pd.concat(samuel_dfs) #display(all_samuels.head()) all_lilys = pd.concat(lily_dfs) #display(all_lilys.head()) all_michaels = pd.concat(michael_dfs) #display(all_michaels.head()) all_carlys = pd.concat(carly_dfs) #display(all_carlys.head()) # Get y-coordinates for plot below print(all_samuels.loc[2000]) print(all_lilys.loc[1999]) print(all_michaels.loc[2000]) print(all_carlys.loc[2004]) # Make a line plot all_samuels["Count"].plot.line(ylabel="Count", legend=True, label="Samuel") all_lilys["Count"].plot.line(legend=True, label="Lily") all_michaels["Count"].plot.line(legend=True, label="Michael") plt = all_carlys["Count"].plot.line(legend=True, label="Carly") plt.scatter(2000, 14171, s=100, marker="*", color="blue") plt.scatter(1999, 2123, s=100, marker="*", color="orange") plt.scatter(2000, 32037, s=100, marker="*", color="green") plt.scatter(2004, 1925, s=100, marker="*", color="red") ###Output _____no_output_____ ###Markdown Exercises 2-4 deal with the [Movielens data 1M Dataset](https://grouplens.org/datasets/movielens/1m/) which has been copied into the Github for this class. This dataset is a collection of movie ratings submitted by users. The information about the movies, ratings, and users are stored in three separate files, called `movies.dat`, `ratings.dat`, and `users.dat`. The column names are not included with the data files. Refer to the data documentation (`./data/movielens/README`) for the column names and how the columns correspond across the data sets.For the first part of this excersize you need to open these datafiles, make sure the column headders are correct, and merge them into a single DataFrame to answer the questions. Take note of the seperators in the data and maybe look at the documentation for [Pandas read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) for some hits. 
###Code import numpy as np movies_df = pd.read_csv("../data/movielens/movies.dat", header=None, names=["MovieID", "Title", "Genres"], engine="python", encoding='latin-1', delimiter="::") ratings_df = pd.read_csv("../data/movielens/ratings.dat", header=None, names=["UserID", "MovieID", "Rating", "Timestamp"], engine="python", encoding='latin-1', delimiter="::") users_df = pd.read_csv("../data/movielens/users.dat", header=None, names=["UserID", "Gender", "Age", "Occupation", "Zip-code"], engine="python", encoding='latin-1', delimiter="::") # Merging users and ratings ru_temp = users_df.merge(ratings_df, on="UserID", how="right") # Merging movies in full_df = ru_temp.merge(movies_df, on="MovieID", how="outer") full_df.head() ###Output _____no_output_____ ###Markdown **Exercise 2.** Who's more generous with ratings: males or females? Calculate the average of the ratings given by male users, and the average of the ratings given by female users. ###Code print(full_df[full_df["Gender"] == "M"]["Rating"].mean()) # 3.569 full_df[full_df["Gender"] == "F"]["Rating"].mean() # 3.620 # Females are more generous with ratings. ###Output 3.5688785290984373 ###Markdown **Exercise 3.** Calculate the number of ratings for each of the movies. How many of the movies had zero ratings?(_Hint_: You may need to use operations on the ratings table first.)(_Hint_: Why is an inner join not sufficient here?) ###Code movie_counts = full_df.value_counts("MovieID") print(movie_counts) full_df.isnull().sum() # There are 177 movies with no ratings. ###Output MovieID 2858 3428 260 2991 1196 2990 1210 2883 480 2672 ... 1470 1 3226 1 773 1 772 1 1579 1 Length: 3883, dtype: int64 ###Markdown **Exercise 4.** How many movies received both a 1 and a 5 rating? Do this by creating and joining two appropriate tables.(*Hint:* The [Pandas unique()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.unique.html) function may be nice here...) ###Code rating_1 = full_df[full_df["Rating"] == 1.0] rating_5 = full_df[full_df["Rating"] == 5.0] both_ratings = rating_1.merge(rating_5, how="inner", on="MovieID", suffixes=("1","5")) len(both_ratings["MovieID"].unique()) # There are 2,986 movies that have a rating of 1 and 5. ###Output _____no_output_____ ###Markdown **Exercise 5.** Among movies with at least 100 ratings, which movie had the highest average rating? (**Hint:** Try filtering the dataframe before using other commands.) ###Code # Make a database with movies with at least 100 reviews movies_100 = full_df.groupby(["MovieID"]).count() movies_100 = movies_100[movies_100["Rating"] >= 100] final = movies_100.merge(full_df, on="MovieID", how="left", suffixes=("Count","")) # Cleanup final = final.drop(columns=["UserIDCount", "GenderCount", "AgeCount", "OccupationCount", "Zip-codeCount", "TimestampCount", "TitleCount", "GenresCount"]) # Get highest rating final.groupby(["MovieID"]).Rating.mean().sort_values() movies_df.loc[2019] # Seven Samurai is the highest rated movie with over 100 ratings: 4.561. ###Output _____no_output_____ ###Markdown (**BONUS Extra Credit (8 points)**: For each movie, calculate the average age of the users who rated it and the average rating. Make a scatterplot showing the relationship between age and rating, with each point representing a movie. Use the size of each point to represent the number of users who rated the movie.(**BONUS Extra Credit (2 points)**: To this plot annotate at least two movies that you like in the graph. 
You can either make them a different color with a key or add a line and mark them. [This will really test your skill with MatPlotLib.](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.annotate.html) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown Lab 05: Merging and Joining Data (15 Possible Bonus Points!)This lab is presented with some revisions from [Dennis Sun at Cal Poly](https://web.calpoly.edu/~dsun09/index.html) and his [Data301 Course](http://users.csc.calpoly.edu/~dsun09/data301/lectures.html) When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/) In many situtions, the information you need is spread across multiple data sets, so you will need to combine multiple data sets into one. In this chapter, we explore how to combine information from multiple (tabular) data sets.As a working example, we will use the baby names data collected by the Social Security Administration. Each data set in this collection contains the names of all babies born in the United States in a particular year. This data is [publicly available](https://www.ssa.gov/OACT/babynames/limits.html), and a copy has been made available at `../data/names.zip`.**Note:** You will need to unzip this data into the directory where you are working to complete this lab!**Note:** If you are on a windows machine the command below won't work quite right, don't worry about it! ###Code import os os.listdir(os.path.join("..","data","names")) ###Output _____no_output_____ ###Markdown As you can see this data is broken up into a lot of individual files, but if we want to use any of our `groupby` and other analysis techniques we need to make it into one file! ConcatenationSometimes, the _rows_ of data are spread across multiple files, and we want to combine the rows into a single data set. The process of combining rows from different data sets is known as **concatenation**. Visually, to concatenate two or more `DataFrame`s means to stack them on top of one another.For example, suppose we want to understand how the popularity of different names evolved between 1995 and 2015. The 1995 names and the 2015 names are stored in two different files: `yob1995.txt` and `yob2015.txt`, respectively. To carry out this analysis, we will need to combine these two data sets into one. ###Code %matplotlib inline import pandas as pd # These two things are for Pandas, #it widens the notebook and lets us display data easily. from IPython.core.display import display, HTML display(HTML("<style>.container { width:95% !important; }</style>")) # Show a ludicrus number of rows and columns pd.options.display.max_rows = 500 pd.options.display.max_columns = 500 pd.options.display.width = 1000 names1995 = pd.read_csv("./data/names/yob1995.txt", header=None, names=["Name", "Sex", "Count"]) names1995.head() names2015 = pd.read_csv("./data/names/yob2015.txt", header=None, names=["Name", "Sex", "Count"]) names2015.head() ###Output _____no_output_____ ###Markdown To concatenate the two, we use the `pd.concat()` function, which accepts a _list_ of `pandas` objects (`DataFrames` or `Series`) and concatenates them. ###Code pd.concat([names1995, names2015]) ###Output _____no_output_____ ###Markdown There are two problems with the combined data set above. First, there is no longer any way to distinguish the 1995 data from the 2015 data. To fix this, we can add a "Year" column to each `DataFrame` before we concatenate. Second, the indexes from the individual `DataFrame`s have been preserved. 
(To see this, observe that the last index in the `DataFrame` is 32,951, which corresponds to the number of rows in `names2015`, but there are actually 59,032 rows in the `DataFrame`.) That means that there are two rows with an index of 0, two rows with an index of 1, and so on. To force `pandas` to create a completely new index for this `DataFrame`, ignoring the indices from the individual `DataFrame`s, we specify `ignore_index=True`. ###Code names1995["Year"] = 1995 names2015["Year"] = 2015 names = pd.concat([names1995, names2015], ignore_index=True) names ###Output _____no_output_____ ###Markdown Now this is a `DataFrame` that we can use!Notice that the data is currently in tabular form, with one row per combination of name, sex, and year. It makes sense to set these to be the index of our `DataFrame`. ###Code names.set_index(["Name", "Sex", "Year"], inplace=True) names.head() ###Output _____no_output_____ ###Markdown We may want to show the counts for the two years side by side. In other words, we want a data cube with (name, sex) along one axis and year along the other. To do this, we can `.unstack()` the year from the index. Note this is similar to a reverse Melt operation that we talked about in class -- a more tidy data way to do this may be to setup year as a multi index. ###Code names.unstack("Year").head() ###Output _____no_output_____ ###Markdown The `NaN`s simply indicate that there were no children (more precisely, if you read [the documentation](https://www.ssa.gov/OACT/babynames/limits.html), fewer than five children) born in the United States in that year. In this case, it makes sense to fill these `NaN` values with 0. ###Code names.unstack().fillna(0).head() ###Output _____no_output_____ ###Markdown Merging (a.k.a. Joining)More commonly, the data sets that we want to combine actually contain different information about the same observations. In other words, instead of stacking the `DataFrame`s on top of each other, as in concatenation, we want to stack them next to each other. The process of combining columns or variables from different data sets is known as **merging** or **joining**.The observations in the two data sets may not be in the same order, so merging is not as simple as stacking the `DataFrame`s side by side. For example, the process might look as follows:![](../images/one-to-one.png)In _pandas_, merging is accomplished using the `.merge()` function. We have to specify the variable(s) that we want to match across the two data sets. For example, to merge the 1995 names with the 2015 names, we have to join on name and sex. ###Code names1995.merge(names2015, on=["Name", "Sex"]).head() ###Output _____no_output_____ ###Markdown The variables `Name` and `Sex` that we joined on each appear once in the resulting `DataFrame`. The variable `Count`, which we did not join on, appears twice---since there are columns called `Count` in both `DataFrame`s. Notice that `pandas` automatically appended the suffix `_x` to the name of the variable from the left data set and `_y` to the name from the right. We can customize the suffixes by specifying the `suffixes=` argument. ###Code names1995.merge(names2015, on=["Name", "Sex"], suffixes=("1995", "2015")).head() ###Output _____no_output_____ ###Markdown In the code above, we assumed that the columns that we joined on had the same names in the two data sets. What if they had different names? For example, suppose the columns had been lowercase in one and uppercase in the other. 
We can specify which variables to use from the left and right data sets using the `left_on=` and `right_on=` arguments. ###Code # Create new DataFrames where the column names are different names1995_lower = names1995.copy() names2015_upper = names2015.copy() names1995_lower.columns = names1995.columns.str.lower() names2015_upper.columns = names2015.columns.str.upper() # This is how you merge them. names1995_lower.merge( names2015_upper, left_on=("name", "sex"), right_on=("NAME", "SEX") ).head() ###Output _____no_output_____ ###Markdown Note that here we've managed to get some redundant columns so we would need to drop these to keep our data tidy! What if the "variables" that we want to join on are in the index? We can always call `.reset_index()` to make them columns, but we can also specify the arguments `left_index=True` or `right_index=True` to force `pandas` to use the index instead of columns. Note that if we were to use the Pandas `join` command the default action would be to join on the indicies. ###Code names1995_idx = names1995.set_index(["Name", "Sex"]) names1995_idx.head() names1995_idx.merge(names2015, left_index=True, right_on=("Name", "Sex")).head() ###Output _____no_output_____ ###Markdown Note that this worked because the left `DataFrame` had an index with two levels, which were joined to two columns from the right `DataFrame`. One-to-One and Many-to-One RelationshipsIn the example above, there was at most one (name, sex) combination in the 2015 data set for each (name, sex) combination in the 1995 data set. These two data sets are thus said to have a **one-to-one relationship**. Another example of a one-to-one data set is the Beatles example from above. Each Beatle appears in each data set exactly once, so the name is uniquely identifying.![](../images/one-to-one.png)However, two data sets need not have a one-to-one relationship. For example, a data set that specifies the instrument(s) that each Beatle played would potentially feature each Beatle multiple times (if they played multiple instruments). If we joined this data set to the "Beatles career" data set, then each row in the "Beatles career" data set would be mapped to several rows in the "instruments" data set. These two data sets are said to have a **many-to-one relationship**.![](../images/many-to-one.png) Many-to-Many Relationships: A Cautionary TaleIn the baby names data, the name is not uniquely identifying. For example, there are both males and females with the name "Jessie". ###Code jessie1995 = names1995[names1995["Name"] == "Jessie"] jessie2015 = names2015[names2015["Name"] == "Jessie"] jessie1995 ###Output _____no_output_____ ###Markdown That is why we have to be sure to join on both name and sex. But what would go wrong if we joined these two `DataFrame`s on just "Name"? Let's try it out: ###Code jessie1995.merge(jessie2015, on=["Name"]) ###Output _____no_output_____ ###Markdown We see that Jessie ends up appearing four times.- Female Jessies from 1995 are matched with female Jessies from 2015. (Good!)- Male Jessies from 1995 are matched with male Jessies from 2015. (Good!)- Female Jessies from 1995 are matched with male Jessies from 2015. (Huh?)- Male Jessies from 1995 are matched with female Jessies from 2015. (Huh?)The problem is that there were multiple Jessies in the 1995 data and multiple Jessies in the 2015 data. We say that these two data sets have a **many-to-many relationship**. 
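###Markdown
pandas can also catch this kind of accidental blow-up for you: `merge` accepts a `validate=` argument that raises an error when the join is not of the expected kind. Here is a small sketch using the `jessie1995` and `jessie2015` frames from above; the `"one_to_one"` check fails precisely because the name alone is not unique on either side.
###Code
# Ask pandas to verify that the join keys are unique on both sides;
# with duplicate "Jessie" rows on each side this raises a MergeError.
try:
    jessie1995.merge(jessie2015, on=["Name"], validate="one_to_one")
except pd.errors.MergeError as err:
    print("Merge rejected:", err)
###Output
_____no_output_____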
Joining DataIn the previous section, we discussed how to _merge_ (or _join_) two data sets by matching on certain variables. But what happens when no match can be found for a row in one `DataFrame`? First, let's determine how _pandas_ handles this situation by default. The name "Nevaeh", which is "Heaven" spelled backwards, is said to have taken off when Sonny Sandoval of the band P.O.D. gave his daughter the name in 2000. Let's look at how common this name was four years earlier and four years after. ###Code names1996 = pd.read_csv("./data/names/yob1996.txt", header=None, names=["Name", "Sex", "Count"]) names2004 = pd.read_csv("./data/names/yob2004.txt", header=None, names=["Name", "Sex", "Count"]) names1996[names1996.Name == "Nevaeh"] names2004[names2004.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown In 1996, there were no girls (or fewer than 5) named Nevaeh; just eight years later, there were over 3000 girls (and 27 boys) with the name. It seems like Sonny Sandoval had a huge effect.What will happen to the name "Nevaeh" when we merge the two data sets? ###Code names = names1996.merge(names2004, on=["Name", "Sex"], suffixes=("1996", "2004")) names[names.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown By default, _pandas_ only includes combinations that are present in _both_ `DataFrame`s. If it cannot find a match for a row in one `DataFrame`, then the combination is simply dropped. But in this context, the fact that a name does not appear in one data set is informative. It means that no babies were born in that year with that name. (Technically, it means that fewer than 5 babies were born with that name, as any name that was assigned fewer than 5 times is omitted for privacy reasons.) We might want to include names that appeared in only one of the two `DataFrame`s, rather than just the names that appeared in both. There are four types of joins, distinguished by whether they include names from the left `DataFrame`, the right `DataFrame`, both, or neither:1. **inner join** (default): only values that are present in _both_ `DataFrame`s are included in the result2. **outer join**: any value that appears in _either_ `DataFrame` is included in the result3. **left join**: any value that appears in the _left_ `DataFrame` is included in the result, whether or not it appears in the right `DataFrame`4. **right join**: any value that appears in the _right_ `DataFrame` is included in the result, whether or not it appears in the left `DataFrame`.In _pandas_, the join type is specified using the `how=` argument.Now let's look at examples of each of these types of joins. ###Code # inner join names_inner = names1996.merge(names2004, on=["Name", "Sex"], how="inner", suffixes=("1996", "2004")) names_inner.head() # outer join names_outer = names1996.merge(names2004, on=["Name", "Sex"], how="outer", suffixes=("1996", "2004")) names_outer.head() ###Output _____no_output_____ ###Markdown Names like "Zyrell" and "Zyron" appeared in the 2004 data but not the 1996 data. For this reason, their count in 1996 is `NaN`. In general, there will be `NaN`s in a `DataFrame` resulting from an outer join. Any time a name appears in one `DataFrame` but not the other, there will be `NaN`s in the columns from the `DataFrame` whose data is missing. ###Code names_outer.isnull().sum() ###Output _____no_output_____ ###Markdown By contrast, there are no `NaN`s when we do an inner join. 
That is because we restrict to only the (name, sex) pairs that appeared in both `DataFrame`s, so we have counts for both 1996 and 2014. ###Code names_inner.isnull().sum() ###Output _____no_output_____ ###Markdown Left and right joins preserve data from one `DataFrame` but not the other. For example, if we were trying to calculate the percentage change for each name from 1996 to 2004, we would want to include all of the names that appeared in the 1996 data. If the name did not appear in the 2004 data, then that is informative. ###Code # left join names_left = names1996.merge(names2004, on=["Name", "Sex"], how="left", suffixes=("1996", "2004")) names_left.head() ###Output _____no_output_____ ###Markdown The result of a left join has `NaN`s in the column from the right `DataFrame`. ###Code names_left.isnull().sum() ###Output _____no_output_____ ###Markdown The result of a right join, on the other hand, has `NaN`s in the column from the left `DataFrame`. ###Code # right join names_right = names1996.merge(names2004, on=["Name", "Sex"], how="right", suffixes=("1996", "2004")) names_right.head() names_right.isnull().sum() ###Output _____no_output_____ ###Markdown One way to visualize the different types of joins is using Venn diagrams. The shaded circles specify which values are included in the output.![](../images/joins.jpeg) Exercises **Exercise 1.** Make a line plot showing the popularity of your name over the years. Make sure you include all the years in the dataset! You'll need to write some code to make sure you open **all** the year datafiles.(**BONUS Extra Credit (3 points)**: As an added challenge, try marking the year you were born with a graphic element.)(**BONUS Extra Credit (2 points)**: As an additional added challenge, also plot 3 friends or relatives along with their birthdays and sequences on the same graph.)(If you have a rare name that does not appear in the data set, choose a friend's name.) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown Exercises 2-4 deal with the [Movielens data 1M Dataset](https://grouplens.org/datasets/movielens/1m/) which has been copied into the Github for this class. This dataset is a collection of movie ratings submitted by users. The information about the movies, ratings, and users are stored in three separate files, called `movies.dat`, `ratings.dat`, and `users.dat`. The column names are not included with the data files. Refer to the data documentation (`./data/movielens/README`) for the column names and how the columns correspond across the data sets.For the first part of this excersize you need to open these datafiles, make sure the column headders are correct, and merge them into a single DataFrame to answer the questions. Take note of the seperators in the data and maybe look at the documentation for [Pandas read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) for some hits. **Exercise 2.** Who's more generous with ratings: males or females? Calculate the average of the ratings given by male users, and the average of the ratings given by female users. ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 3.** Calculate the number of ratings for each of the movies. How many of the movies had zero ratings?(_Hint_: You may need to use operations on the ratings table first.)(_Hint_: Why is an inner join not sufficient here?) 
###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 4.** How many movies received both a 1 and a 5 rating? Do this by creating and joining two appropriate tables.(*Hint:* The [Pandas unique()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.unique.html) function may be nice here...) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown **Exercise 5.** Among movies with at least 100 ratings, which movie had the highest average rating? (**Hint:** Try filtering the dataframe before using other commands.) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown (**BONUS Extra Credit (8 points)**: For each movie, calculate the average age of the users who rated it and the average rating. Make a scatterplot showing the relationship between age and rating, with each point representing a movie. Use the size of each point to represent the number of users who rated the movie.(**BONUS Extra Credit (2 points)**: To this plot annotate at least two movies that you like in the graph. You can either make them a different color with a key or add a line and mark them. [This will really test your skill with MatPlotLib.](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.annotate.html) ###Code # TYPE YOUR CODE HERE ###Output _____no_output_____ ###Markdown Lab 05: Merging and Joining Data (10 Bonus Points!)This lab is presented with some revisions from [Dennis Sun at Cal Poly](https://web.calpoly.edu/~dsun09/index.html) and his [Data301 Course](http://users.csc.calpoly.edu/~dsun09/data301/lectures.html) When you have filled out all the questions, submit via [Tulane Canvas](https://tulane.instructure.com/) In many situtions, the information you need is spread across multiple data sets, so you will need to combine multiple data sets into one. In this chapter, we explore how to combine information from multiple (tabular) data sets.As a working example, we will use the baby names data collected by the Social Security Administration. Each data set in this collection contains the names of all babies born in the United States in a particular year. This data is [publicly available](https://www.ssa.gov/OACT/babynames/limits.html), and a copy has been made available at `../data/names.zip`.**Note:** You will need to unzip this data into the directory where you are working to complete this lab!**Note:** If you are on a windows machine the command below won't work quite right, don't worry about it! ###Code import os os.listdir(os.path.join("..","data","names")) ###Output _____no_output_____ ###Markdown As you can see this data is broken up into a lot of individual files, but if we want to use any of our `groupby` and other analysis techniques we need to make it into one file! ConcatenationSometimes, the _rows_ of data are spread across multiple files, and we want to combine the rows into a single data set. The process of combining rows from different data sets is known as **concatenation**. Visually, to concatenate two or more `DataFrame`s means to stack them on top of one another.For example, suppose we want to understand how the popularity of different names evolved between 1995 and 2015. The 1995 names and the 2015 names are stored in two different files: `yob1995.txt` and `yob2015.txt`, respectively. To carry out this analysis, we will need to combine these two data sets into one. 
###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np # These two things are for Pandas, #it widens the notebook and lets us display data easily. from IPython.core.display import display, HTML display(HTML("<style>.container { width:95% !important; }</style>")) # Show a ludicrus number of rows and columns pd.options.display.max_rows = 500 pd.options.display.max_columns = 500 pd.options.display.width = 1000 names1995 = pd.read_csv("../data/names/yob1995.txt", header=None, names=["Name", "Sex", "Count"]) names1995.head() names2015 = pd.read_csv("../data/names/yob2015.txt", header=None, names=["Name", "Sex", "Count"]) names2015.head() ###Output _____no_output_____ ###Markdown To concatenate the two, we use the `pd.concat()` function, which accepts a _list_ of `pandas` objects (`DataFrames` or `Series`) and concatenates them. ###Code pd.concat([names1995, names2015]) ###Output _____no_output_____ ###Markdown There are two problems with the combined data set above. First, there is no longer any way to distinguish the 1995 data from the 2015 data. To fix this, we can add a "Year" column to each `DataFrame` before we concatenate. Second, the indexes from the individual `DataFrame`s have been preserved. (To see this, observe that the last index in the `DataFrame` is 32,951, which corresponds to the number of rows in `names2015`, but there are actually 59,032 rows in the `DataFrame`.) That means that there are two rows with an index of 0, two rows with an index of 1, and so on. To force `pandas` to create a completely new index for this `DataFrame`, ignoring the indices from the individual `DataFrame`s, we specify `ignore_index=True`. ###Code names1995["Year"] = 1995 names2015["Year"] = 2015 names = pd.concat([names1995, names2015], ignore_index=True) names ###Output _____no_output_____ ###Markdown Now this is a `DataFrame` that we can use!Notice that the data is currently in tabular form, with one row per combination of name, sex, and year. It makes sense to set these to be the index of our `DataFrame`. ###Code names.set_index(["Name", "Sex", "Year"], inplace=True) names.head() ###Output _____no_output_____ ###Markdown We may want to show the counts for the two years side by side. In other words, we want a data cube with (name, sex) along one axis and year along the other. To do this, we can `.unstack()` the year from the index. Note this is similar to a reverse Melt operation that we talked about in class -- a more tidy data way to do this may be to setup year as a multi index. ###Code names.unstack("Year").head() ###Output _____no_output_____ ###Markdown The `NaN`s simply indicate that there were no children (more precisely, if you read [the documentation](https://www.ssa.gov/OACT/babynames/limits.html), fewer than five children) born in the United States in that year. In this case, it makes sense to fill these `NaN` values with 0. ###Code names.unstack().fillna(0).head() ###Output _____no_output_____ ###Markdown Merging (a.k.a. Joining)More commonly, the data sets that we want to combine actually contain different information about the same observations. In other words, instead of stacking the `DataFrame`s on top of each other, as in concatenation, we want to stack them next to each other. The process of combining columns or variables from different data sets is known as **merging** or **joining**.The observations in the two data sets may not be in the same order, so merging is not as simple as stacking the `DataFrame`s side by side. 
For example, the process might look as follows:![](../images/one-to-one.png)In _pandas_, merging is accomplished using the `.merge()` function. We have to specify the variable(s) that we want to match across the two data sets. For example, to merge the 1995 names with the 2015 names, we have to join on name and sex. ###Code names1995.merge(names2015, on=["Name", "Sex"]).head() ###Output _____no_output_____ ###Markdown The variables `Name` and `Sex` that we joined on each appear once in the resulting `DataFrame`. The variable `Count`, which we did not join on, appears twice---since there are columns called `Count` in both `DataFrame`s. Notice that `pandas` automatically appended the suffix `_x` to the name of the variable from the left data set and `_y` to the name from the right. We can customize the suffixes by specifying the `suffixes=` argument. ###Code names1995.merge(names2015, on=["Name", "Sex"], suffixes=("1995", "2015")).head() ###Output _____no_output_____ ###Markdown In the code above, we assumed that the columns that we joined on had the same names in the two data sets. What if they had different names? For example, suppose the columns had been lowercase in one and uppercase in the other. We can specify which variables to use from the left and right data sets using the `left_on=` and `right_on=` arguments. ###Code # Create new DataFrames where the column names are different names1995_lower = names1995.copy() names2015_upper = names2015.copy() names1995_lower.columns = names1995.columns.str.lower() names2015_upper.columns = names2015.columns.str.upper() # This is how you merge them. names1995_lower.merge( names2015_upper, left_on=("name", "sex"), right_on=("NAME", "SEX") ).head() ###Output _____no_output_____ ###Markdown Note that here we've managed to get some redundant columns so we would need to drop these to keep our data tidy! What if the "variables" that we want to join on are in the index? We can always call `.reset_index()` to make them columns, but we can also specify the arguments `left_index=True` or `right_index=True` to force `pandas` to use the index instead of columns. Note that if we were to use the Pandas `join` command the default action would be to join on the indicies. ###Code names1995_idx = names1995.set_index(["Name", "Sex"]) names1995_idx.head() names1995_idx.merge(names2015, left_index=True, right_on=("Name", "Sex")).head() ###Output _____no_output_____ ###Markdown Note that this worked because the left `DataFrame` had an index with two levels, which were joined to two columns from the right `DataFrame`. One-to-One and Many-to-One RelationshipsIn the example above, there was at most one (name, sex) combination in the 2015 data set for each (name, sex) combination in the 1995 data set. These two data sets are thus said to have a **one-to-one relationship**. Another example of a one-to-one data set is the Beatles example from above. Each Beatle appears in each data set exactly once, so the name is uniquely identifying.![](../images/one-to-one.png)However, two data sets need not have a one-to-one relationship. For example, a data set that specifies the instrument(s) that each Beatle played would potentially feature each Beatle multiple times (if they played multiple instruments). If we joined this data set to the "Beatles career" data set, then each row in the "Beatles career" data set would be mapped to several rows in the "instruments" data set. 
These two data sets are said to have a **many-to-one relationship**.![](../images/many-to-one.png) Many-to-Many Relationships: A Cautionary TaleIn the baby names data, the name is not uniquely identifying. For example, there are both males and females with the name "Jessie". ###Code jessie1995 = names1995[names1995["Name"] == "Jessie"] jessie2015 = names2015[names2015["Name"] == "Jessie"] jessie1995 ###Output _____no_output_____ ###Markdown That is why we have to be sure to join on both name and sex. But what would go wrong if we joined these two `DataFrame`s on just "Name"? Let's try it out: ###Code jessie1995.merge(jessie2015, on=["Name"]) ###Output _____no_output_____ ###Markdown We see that Jessie ends up appearing four times.- Female Jessies from 1995 are matched with female Jessies from 2015. (Good!)- Male Jessies from 1995 are matched with male Jessies from 2015. (Good!)- Female Jessies from 1995 are matched with male Jessies from 2015. (Huh?)- Male Jessies from 1995 are matched with female Jessies from 2015. (Huh?)The problem is that there were multiple Jessies in the 1995 data and multiple Jessies in the 2015 data. We say that these two data sets have a **many-to-many relationship**. Joining DataIn the previous section, we discussed how to _merge_ (or _join_) two data sets by matching on certain variables. But what happens when no match can be found for a row in one `DataFrame`? First, let's determine how _pandas_ handles this situation by default. The name "Nevaeh", which is "Heaven" spelled backwards, is said to have taken off when Sonny Sandoval of the band P.O.D. gave his daughter the name in 2000. Let's look at how common this name was four years earlier and four years after. ###Code names1996 = pd.read_csv("../data/names/yob1996.txt", header=None, names=["Name", "Sex", "Count"]) names2004 = pd.read_csv("../data/names/yob2004.txt", header=None, names=["Name", "Sex", "Count"]) names1996[names1996.Name == "Nevaeh"] names2004[names2004.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown In 1996, there were no girls (or fewer than 5) named Nevaeh; just eight years later, there were over 3000 girls (and 27 boys) with the name. It seems like Sonny Sandoval had a huge effect.What will happen to the name "Nevaeh" when we merge the two data sets? ###Code names = names1996.merge(names2004, on=["Name", "Sex"], suffixes=("1996", "2004")) names[names.Name == "Nevaeh"] ###Output _____no_output_____ ###Markdown By default, _pandas_ only includes combinations that are present in _both_ `DataFrame`s. If it cannot find a match for a row in one `DataFrame`, then the combination is simply dropped. But in this context, the fact that a name does not appear in one data set is informative. It means that no babies were born in that year with that name. (Technically, it means that fewer than 5 babies were born with that name, as any name that was assigned fewer than 5 times is omitted for privacy reasons.) We might want to include names that appeared in only one of the two `DataFrame`s, rather than just the names that appeared in both. There are four types of joins, distinguished by whether they include names from the left `DataFrame`, the right `DataFrame`, both, or neither:1. **inner join** (default): only values that are present in _both_ `DataFrame`s are included in the result2. **outer join**: any value that appears in _either_ `DataFrame` is included in the result3. 
**left join**: any value that appears in the _left_ `DataFrame` is included in the result, whether or not it appears in the right `DataFrame`4. **right join**: any value that appears in the _right_ `DataFrame` is included in the result, whether or not it appears in the left `DataFrame`.In _pandas_, the join type is specified using the `how=` argument.Now let's look at examples of each of these types of joins. ###Code # inner join names_inner = names1996.merge(names2004, on=["Name", "Sex"], how="inner", suffixes=("1996", "2004")) names_inner.head() # outer join names_outer = names1996.merge(names2004, on=["Name", "Sex"], how="outer", suffixes=("1996", "2004")) names_outer.head() ###Output _____no_output_____ ###Markdown Names like "Zyrell" and "Zyron" appeared in the 2004 data but not the 1996 data. For this reason, their count in 1996 is `NaN`. In general, there will be `NaN`s in a `DataFrame` resulting from an outer join. Any time a name appears in one `DataFrame` but not the other, there will be `NaN`s in the columns from the `DataFrame` whose data is missing. ###Code names_outer.isnull().sum() ###Output _____no_output_____ ###Markdown By contrast, there are no `NaN`s when we do an inner join. That is because we restrict to only the (name, sex) pairs that appeared in both `DataFrame`s, so we have counts for both 1996 and 2014. ###Code names_inner.isnull().sum() ###Output _____no_output_____ ###Markdown Left and right joins preserve data from one `DataFrame` but not the other. For example, if we were trying to calculate the percentage change for each name from 1996 to 2004, we would want to include all of the names that appeared in the 1996 data. If the name did not appear in the 2004 data, then that is informative. ###Code # left join names_left = names1996.merge(names2004, on=["Name", "Sex"], how="left", suffixes=("1996", "2004")) names_left.head() ###Output _____no_output_____ ###Markdown The result of a left join has `NaN`s in the column from the right `DataFrame`. ###Code names_left.isnull().sum() ###Output _____no_output_____ ###Markdown The result of a right join, on the other hand, has `NaN`s in the column from the left `DataFrame`. ###Code # right join names_right = names1996.merge(names2004, on=["Name", "Sex"], how="right", suffixes=("1996", "2004")) names_right.head() names_right.isnull().sum() ###Output _____no_output_____ ###Markdown One way to visualize the different types of joins is using Venn diagrams. The shaded circles specify which values are included in the output.![](../images/joins.jpeg) Exercises **Exercise 1.** Make a line plot showing the popularity of your name over the years. Make sure you include all the years in the dataset! You'll need to write some code to make sure you open **all** the year datafiles.(**BONUS Extra Credit (2 points)**: As an added challenge, try marking the year you were born with a graphic element.)(**BONUS Extra Credit (2 points)**: As an additional added challenge, also plot 3 friends or relatives along with their birthdays and sequences on the same graph.)(If you have a rare name that does not appear in the data set, choose a friend's name.) 
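###Markdown
Exercise 1 needs every year in the data set, not just two. One way to gather them (a sketch, assuming the `../data/names/yobYYYY.txt` layout used earlier in this notebook) is to glob the files and concatenate them with a `Year` column; the cell after this sketch shows the solution actually used here, which builds the same thing with an explicit year counter instead.
###Code
import glob
import os

# Sketch: stack every yobYYYY.txt file into a single DataFrame with a Year column
frames = []
for path in sorted(glob.glob(os.path.join("..", "data", "names", "yob*.txt"))):
    year = int(os.path.basename(path)[3:7])   # "yob1995.txt" -> 1995
    frame = pd.read_csv(path, header=None, names=["Name", "Sex", "Count"])
    frame["Year"] = year
    frames.append(frame)
all_years = pd.concat(frames, ignore_index=True)
all_years.head()
###Output
_____no_output_____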
###Code data_M = [] data_F = [] columns = [] initString = 'yob' df_names = pd.read_csv('../data/names/yob1880.txt', header=None, names=["Name", "Sex", "Count"]) df_names.set_index(["Name","Sex"], inplace=True) df_names['Year'] = 1880 for i in range(1881,2019): pathSTR = '../data/names/'+initString+str(i)+'.txt' new_data = pd.read_csv(pathSTR, header=None, names=["Name", "Sex", "Count"]) new_data.set_index(["Name","Sex"], inplace=True) new_data['Year'] = i df_names = pd.concat([df_names, new_data]) df_names #remove index from df_names df_help = df_names.reset_index() df_help.set_index("Name") df_help[df_help['Name']=='Max'] df_help[(df_help['Name']=='Max')].pivot_table(index=["Year"],columns=['Sex'],values='Count').plot.line() ###Output _____no_output_____ ###Markdown Exercises 2-4 deal with the [Movielens data 1M Dataset](https://grouplens.org/datasets/movielens/1m/) which has been copied into the Github for this class. This dataset is a collection of movie ratings submitted by users. The information about the movies, ratings, and users are stored in three separate files, called `movies.dat`, `ratings.dat`, and `users.dat`. The column names are not included with the data files. Refer to the data documentation (`./data/movielens/README`) for the column names and how the columns correspond across the data sets.For the first part of this excersize you need to open these datafiles, make sure the column headders are correct, and merge them into a single DataFrame to answer the questions. Take note of the seperators in the data and maybe look at the documentation for [Pandas read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) for some hits. **Exercise 2.** Who's more generous with ratings: males or females? Calculate the average of the ratings given by male users, and the average of the ratings given by female users. ###Code user_df = pd.read_csv('../data/ml-1m/users.dat', header=None, names=["UserID","Gender","Age","Occupation","Zip-code"],sep='::') rating_df = pd.read_csv('../data/ml-1m/ratings.dat', header=None, names=["UserID","MovieID","Rating","Timestamp"],sep='::') movie_df = pd.read_csv('../data/ml-1m/movies.dat', header=None, names=["MovieID","Title","Genres"],sep='::',encoding='latin-1') #found on stackoverflow due to some characters in file DB_df = user_df.merge(rating_df, on=['UserID'],how='outer') DB_df = DB_df.merge(movie_df, on=['MovieID'],how='outer') DB_df.groupby("Gender").Rating.mean() #On average, Females are more genrous with ranks than Males. ###Output C:\Users\MSend\anaconda3\envs\Capstone\lib\site-packages\pandas\util\_decorators.py:311: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'. return func(*args, **kwargs) ###Markdown **Exercise 3.** Calculate the number of ratings for each of the movies. How many of the movies had zero ratings?(_Hint_: You may need to use operations on the ratings table first.)(_Hint_: Why is an inner join not sufficient here?) ###Code rating_df.merge(movie_df, on=['MovieID'],how='right').isnull().sum() #len(inithelp[inithelp.Rating==True]) #double checking #177 movies had zero ratings ###Output _____no_output_____ ###Markdown **Exercise 4.** How many movies received both a 1 and a 5 rating? 
Do this by creating and joining two appropriate tables.(*Hint:* The [Pandas unique()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.unique.html) function may be nice here...) ###Code new_df = rating_df.merge(movie_df, on=['MovieID'],how='outer') new_df.groupby("MovieID").Rating.unique().isin([1,5]).sum() # Ten movies received both a 1 and 5 rating. ###Output _____no_output_____ ###Markdown **Exercise 5.** Among movies with at least 100 ratings, which movie had the highest average rating? (**Hint:** Try filtering the dataframe before using other commands.) ###Code new_df = rating_df.merge(movie_df, on=['MovieID'],how='left') help_df = new_df.set_index("Title") over100 = help_df[new_df.Title.value_counts()>100] over100.groupby("Title").Rating.mean().sort_values(ascending=False).iloc[[0]] #Movie with highest average rating is "Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) with 4.56051" ###Output C:\WINDOWS\TEMP/ipykernel_1620/3160856314.py:3: UserWarning: Boolean Series key will be reindexed to match DataFrame index. over100 = help_df[new_df.Title.value_counts()>100] ###Markdown **BONUS BONUS 8 POINTS.** For each movie, calculate the average age of the users who rated it and the average rating. Make a scatterplot showing the relationship between age and rating, with each point representing a movie. Use the size of each point to represent the number of users who rated the movie.**BONUS Extra Credit (2 points)**: To this plot annotate at least two movies that you like in the graph. You can either make them a different color with a key or add a line and mark them. This will really test your skill with MatPlotLib. ###Code #For each movie, calculate the average age of the users who rated it and the average rating. age = DB_df.groupby("Title").Age.mean().to_frame().reset_index().Age rating = DB_df.groupby("Title").Rating.mean().to_frame().reset_index().Rating title = pd.DataFrame(DB_df.Title.unique(),columns=['Title']) title['age'] = age title['rating'] = rating plt.scatter(age,rating) ###Output _____no_output_____
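###Markdown
The scatter above plots average age against average rating but does not yet scale the points by how many users rated each movie, which is what the bonus asks for. Here is a minimal sketch of that last step, reusing the merged `DB_df` frame from the earlier cells; the divisor of 20 is an arbitrary choice to keep the markers readable.
###Code
# Sketch: one point per movie, sized by the number of ratings it received
per_movie = DB_df.groupby("Title").agg(
    mean_age=("Age", "mean"),
    mean_rating=("Rating", "mean"),
    n_ratings=("Rating", "count"),
)
plt.scatter(per_movie["mean_age"], per_movie["mean_rating"],
            s=per_movie["n_ratings"] / 20, alpha=0.3)
plt.xlabel("Average age of raters")
plt.ylabel("Average rating")
###Output
_____no_output_____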
lessons/pydata/univariate/index.ipynb
###Markdown Jak je vidět, máme v datasetu přesně polovinu mužů a žen. Vizualizace Je velmi důležité umět porozumět popisným statistikám, které nám usnadní pochopení dat, na které se díváme. Neodmyslitelným doplňkem k tomu jsou grafy, které mohou rozkrýt další skryté vlastnosti či objasnit v číslech skryté informace.Všechny naše grafy na pozadí produkuje knihovna matplotlib, ale při jejich tvorbě si po většinu času vystačíme s metodou `plot()`, kterou nám dává k dispozici pandas. Až později, když budeme chtít grafy upravovat a různě kombinovat, budeme potřebovat matplotlib použít přímo.Aby vše fungovalo správně a využili jsme výhod jupyteru, speciálním příkazem nastavíme, aby se grafy zobrazovaly přímo v notebooku. ###Code %matplotlib inline ###Output _____no_output_____ ###Markdown Histogram Mezi nejběžnější grafy datové analýzy patří histogramy. Histogram má na vodorovné ose rozprostřeny hodnoty z daného sloupce tabulky a výška každého sloupce nám ukazuje, kolikrát je daná hodnota zastoupena v datech. Pro přehlednost nemá každá jednotlivá hodnota svůj sloupec, ale jsou ve skupinách. Pojďme si jeden nakreslit. ###Code data.vaha.plot(kind="hist", bins=40); ###Output _____no_output_____ ###Markdown Na předchozím řádku se děje hned několi věcí najednou, tak si je pojďme rozebrat. Metoda `plot` umí ze sloupce či celé tabulky vytvořit graf. Jaký graf to bude, o tom rozhodje pojmenovaný argument `kind`. Do kolika sloupců se má histogram rozčlenit, to je nastaveno pojmenovaným argumentem `bins`.Středník na konci řádku zabrání, aby se kromě grafu vypsala i neužitečná informace ve stylu ``.Z histogramu je vidět, že místo očekávaného průměru, který bude mít v datasetu nejvíce zástupců, máme něco jako průměry dva. To může být způsobeno tím, že máme v záznamech polovinu žen a druhou mužů. Zkusme se na tyto dvě skupiny podívat zvlášť. ###Code data[data.pohlavi == "Muž"].vaha.plot.hist(bins=30); data[data.pohlavi == "Žena"].vaha.plot.hist(bins=30); ###Output _____no_output_____ ###Markdown Díky filtraci řádku před vykreslením histogramu jsme získali dva samostatné histogramy - jeden pro každé pohlaví. Za povšimnutí také stojí, že jsme zde místo pojmenovaného argumentu `kind` zvolili volání metody `plot.hist`, které funguje naprosto stejně, ale může někomu více vyhovovat.Z histogramů je patrné, že obě pohlaví mají nějakou průměrnou váhu, která je v datech zastoupena nejvíce záznamy. Je třeba se mít na pozoru, protože i když oba histogramy vypadají dost podobně, jejich vodorovná osa obsahuje zcela odlišné hodnoty a tak zatímco mužů pod 50 kg váhy je v datasetu zanedbatelné množství, u žen je to celkem početná část.Chceme-li měnit různá nastavení grafu je možné použít objektově-orientované rozhraní knihovny `matplotlib`, které nám umožní s vizualizacemi všemožně manipulovat. Proto si jej musíme nejdříve importovat. ###Code from matplotlib import pyplot as plt fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot(1, 1, 1) data[data.pohlavi == "Žena"].vaha.plot.hist(ax=ax, bins=30); ax.set_title("Histogram váhy žen"); ax.set_xlabel("Váha v kilogramech"); ###Output _____no_output_____ ###Markdown Voláním `plt.figure` vytvoříme kontejner pro celý obrázek dané velikosti (v palcích). `fig.add_subplot(1, 1, 1)` nám do tohoto obrázku přidá osy pro budoucí graf. 
Tři jedničky znamenají, že graf má být v pomyslné tabulce s jedním řádkem a jedním sloupcem na prvním místě.Díky tomu, že objekty reprezentující celý obrázek a graf v něm máme v samostatných proměnných, můžeme se k nim kdykoli vrátit a nastavit libovolné vlastnosti. Tady se nám pro přehlednost hodí nastavit titulek pro graf a popis osy X.Vykreslit více histogramů do jednoho místa také není problém, stačí pandasu říci, aby pro kreslení histogramu využil připravené místo v obrázku. ###Code fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot() # Tři jedničky není třeba psát pokaždé znovu data[data.pohlavi == "Žena"].vaha.plot.hist(ax=ax, bins=30); data[data.pohlavi == "Muž"].vaha.plot.hist(ax=ax, bins=30); ax.set_title("Histogram váhy mužů a žen"); ax.set_xlabel("Váha v kilogramech"); ###Output _____no_output_____ ###Markdown Sice jsme tímto spojením histogramů přišli o kousek informace, protože nevidíme jak jsou zastoupeny váhově nadprůměrné ženy, ale zase jsme získali lepší přehled o tom, jak je na tom každá ze skupin co se průměru týče.Díky možnosti vkládat vícero grafů do jednoho obrázku získáme i možnost je mezi sebou snáze porovnávat. Můžeme si zkusit vykreslit histogramy vedle sebe i pod sebou. ###Code fig = plt.figure(figsize=(10, 5)) ax1 = fig.add_subplot(2, 1, 1) # Dva řádky, jeden sloupec, první graf ax2 = fig.add_subplot(2, 1, 2, sharex=ax1) # Druhý graf, sdílená osa X data[data.pohlavi == "Žena"].vyska.plot.hist(ax=ax1, bins=30); data[data.pohlavi == "Muž"].vyska.plot.hist(ax=ax2, bins=30); fig.suptitle("Histogram výšky mužů a žen"); ax2.set_xlabel("Výška v centimetrech"); ###Output _____no_output_____ ###Markdown `ax1` a `ax2` obsahují připravené osy pro oba histogramy a při jejich vytváření jsme je jednak rozmístili do obrázku o dvou řádcích a jednom sloupci a také jsme nastavili, aby druhý graf sdílel osu X s prvním grafem, což nám velmi usnadní jejich porovnání.Velmi podobné je to s dvěma sloupci a sdílenou osou Y. ###Code fig = plt.figure(figsize=(10, 5)) ax1 = fig.add_subplot(1, 2, 1) # Jeden řádek, dva sloupce, první graf ax2 = fig.add_subplot(1, 2, 2, sharey=ax1) # Druhý graf, sdílená osa Y data[data.pohlavi == "Žena"].vyska.plot.hist(ax=ax1, bins=30); data[data.pohlavi == "Muž"].vyska.plot.hist(ax=ax2, bins=30); fig.suptitle("Histogram výšky mužů a žen"); ax1.set_xlabel("Výška v centimetrech"); ax2.set_xlabel("Výška v centimetrech"); ###Output _____no_output_____ ###Markdown Jak je vidět, histogramy pod sebou se sdílenou osou X mají v tomto případě daleko vyšší vypovídající hodnotu, protože je z nich dobře poznat distribuce výšky u obou pohlaví. Sloupcový graf Ze sloupce pohlaví si jednoduše histogram vykreslit nemůžeme, protože tento sloupec neobsahuje číselné hodnoty. My už ale víme, jak ze sloupce s kategoriální proměnnou číselné hodnoty získat a tak stačí tyto dva přístupy skombinovat a výsledek vykreslit do sloupcového grafu. ###Code data.pohlavi.value_counts().plot.bar(); ###Output _____no_output_____ ###Markdown Sloupcový graf je zde jen pro úplnost, abychom si graficky zobrazili všechny sloupce. O jeho možnostech, kterých je opravdu velké množství, bude řeč později. Krabicový grafPosledním typem grafu, který si dnes ukážeme, je krabicový graf neboli boxplot. O co menší je jeho popularita u laické veřejnosti o to více infomací obsahuje pro zkušené analytiky. Jeho intepretace není vždy triviální, ale za to nabízí opravdu hodně informací v jednom obrázku. 
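###Markdown
Než si krabicový graf vykreslíme, můžeme si čísla, ze kterých vychází, spočítat i ručně. Následující buňka je jen ilustrační náčrt pro sloupec `vaha` - spočítá kvartily, IQR a hranice 1,5 × IQR, o kterých bude řeč pod grafem.
###Code
# Náčrt: kvartily, IQR a hranice pro odlehlé hodnoty u sloupce vaha
q1 = data.vaha.quantile(0.25)
q3 = data.vaha.quantile(0.75)
iqr = q3 - q1
print("Q1:", q1, "Q3:", q3, "IQR:", iqr)
print("Dolní hranice:", q1 - 1.5 * iqr)
print("Horní hranice:", q3 + 1.5 * iqr)
###Output
_____no_output_____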
###Code data.plot.box(figsize=(10, 10)); ###Output _____no_output_____ ###Markdown V krabicovém grafu vidíme vyobrazeny obě numerické proměnné najednou. Zelená čára uprostřed označuje medián. Oblast označená obdelníkem (krabicí) označuje rozsah mezi 25% a 75% percentily (mezi prvním a třetím kvartilem). Krátké vodorovné čárky označují rozsah definovaný vzorcem 1,5 × IQR, kde IQR je tzv. inter-quartile range tedy rozsah od prvního po třetí kvartil a vypočte se jako Q3 - Q1. Co se do tohoto rozsahu nevejde, je označeno puntíkem a znamená to, že tyto hodnoty jsou brány jako odlehlá měření. Je to tedy to podstatné z popisné statistiky v kostce (krabici).Abychom si to ukázali i prakticky na konkrétních číslech. IQR pro váhu je 84,9 (Q3) - 61,6 (Q1) = 23,3. Jeden a půl násobek IQR je 34.95. Rozsah pro vodorovné čárky je tedy 84,9 + 34,95 = 119,85 kg na horní hranici a 61,6 - 34,95 = 26,65 kg na spodní hranici. V grafu i v tabulce je vidět, že pouze jediná hodnota se tomuto rozsahu vymyká a to je váha 122,47 kg.> Za povšimnutí stojí, že některé parametry grafu lze nastavit i přímo jako pojmenované argumenty některé k `plot` metod a ušetřit si tak další volání různých modifikací.Zkusme si teď vykreslit podobné porování hodnot váhy pro muže a ženy v krabicovém grafu. Krabicový graf je pro podobná porovnání jako stvořený, ale bohužel ne ve své obyčejnější variantě s metodou `plot.box`. Daleko mocnější je metoda `boxplot`, která umožní grafu nastavit pojmenovaným argumentem `by` sloupec, podle kterého se mají záznamy dělit do skupin a pomocí argumentu `column` vybrat jen ten sloupec, který nás zajímá.> Proč je tomu tak a existují dvě metody na stejnou práci bohužel netuším. V nástrojích na datovou analytiku se často setkáš s přístupem, kdy jeden problém lze řešit mnoha různými způsoby a je jen na tobě, který si vybereš a oblíbíš. ###Code data.boxplot(column="vaha", by="pohlavi", figsize=(10, 10)); ###Output _____no_output_____ ###Markdown Explorační datová analýza Explorační datová analýza (zkráceně EDA) je soubor technik a metod, které se používají k hledání zajímavých informací v datech a tvorbě hypotéz, které je následně možné testovat. EDA je často úplně první krok, který se s daty provádí a který do dalších analýz přináší nejen hodně užitečných poznatků o datech, ale také data samotná připravená k dalšímu zpracování.Dnes se společně podíváme na to, jak správně data načíst, odhalit základní chyby, zobrazit souhrné informace a následně provedeme analýzu jednotlivých proměnných a dojde i na základní vizualizace.Data k analýze jsou připravena v tabulce [vaha-vyska.csv](static/vaha-vyska.csv). ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Načtení a kontrola dat Opět máme data ve formátu CSV a tak si je načteme ze souboru funkcí `read_csv`. ###Code data = pd.read_csv("static/vaha-vyska.csv") ###Output _____no_output_____ ###Markdown Načtení se zřejmě povedlo, ale jistotu získáme, až když si data prohlédneme. ###Code data ###Output _____no_output_____ ###Markdown Výsledek sice vypadá jako tabulka, ale zcela v pořádku není. Jak je vidět, máme hodnoty výšky a váhy i v prvním řádku, kde bychom spíše očekávali názvy sloupců. 
Občas se stane, že názvy sloupců nejsou přímo v CSV souboru, ale jsou dodávány samostatně v odděleném dokumentu často i s vysvětlivkami, co jednotlivé sloupce znamenají a jaké hodnoty obsahují.V tomto případě je snadné odhadnout, že první sloupec bude obsahovat pohlaví, druhý výšku a třetí váhu, ale je samozřejmě lepší se o tom vždy přesvědčit u zdroje dat.Pojďme tedy načíst data znovu a názvy sloupců dodat ručně. ###Code data = pd.read_csv("static/vaha-vyska.csv", names=["pohlavi", "vyska", "vaha"]) data ###Output _____no_output_____ ###Markdown K dispozici máme tabulku s deseti tisíci záznamy. Zatím si ale nemůžeme být jisti, že všechny záznamy obsahují všechny hodnoty. Bývá dobrým zvykem se podívat, kolik je v datasetu tzv. nulových hodnot. Neméně užitečnou informací pro nás je, zda Pandas správně rozeznal datové typy jednotlivých proměnných, o kterých ze zdrojového CSV nedostane žádnou informaci a tak je musí odhadnout z obsahu sloupců. Oboje se dozvíme s pomocí metody `info`.> V dalších lekcích se společně podíváme, jak se s nulovými hodnotami vypořádat. ###Code data.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 10000 entries, 0 to 9999 Data columns (total 3 columns): pohlavi 10000 non-null object vyska 10000 non-null object vaha 10000 non-null float64 dtypes: float64(1), object(2) memory usage: 234.5+ KB ###Markdown Kromě informace o nulových hodnotách a datových typech jsme se ještě dozvěděli, kolik naše tabulka zabírá v paměti a jaký používá index.> Datové typy jsou podobné těm v Pythonu, ale přeci jen se mírně odlišují. Jejich kompletní seznam je možné nalézt v dokumentaci k [NumPy](https://numpy.org/devdocs/user/basics.types.htmldata-types).Ti pozorní si všimnou, že i když očekáváme ve sloupci s váhou a výškou stejný datový typ (desetinné číslo neboli `float`), je výška označena jako `object`. Ti ještě pozornější už zahlédli i důvod - desetinná čárka místo desetinné tečky. Zatímco sloupec váha obsahuje desetinné tečky a je tedy správně identifikován jako číselný, sloupec s výškou obsahuje desetinné čárky a tak jej Pandas považuje za obecný objekt. Opravit takovou chybu není nijak složité.> V praxi se příliš často nestává, že by dva sloupce v jednom datasetu používaly různé znaky jako oddělovač desetinných míst. Pokud je tento problém v celé tabulce, stačí dát funkci `read_csv` pojmenovaný argument `decimal` a Pandas se o převod na desetinná čísla postará sám. ###Code data.vyska = data.vyska.str.replace(",", ".").astype(float) ###Output _____no_output_____ ###Markdown V příkazu výše se děje hned několik věcí najednou. Nejdříve na sloupci výška zavoláme metodu `str.replace`, která nám zamění všechny čárky za tečky a následně nám tento pozměněný sloupec metoda `astype` převede na sloupec s desetinnými čísly. Abychom docílili změny v tabulce, uložíme opravený sloupec zpět pod jeho původní jméno. ###Code data.info() data ###Output _____no_output_____ ###Markdown Prozatím jsme se dívali na data z pohledu jejich úplnosti a správnosti, ale zatím o nich nic moc nevíme. Pojďme se nejdříve podívat na základní typy proměnných a pak na popisnou statistiku, kterou pro ně můžeme použít. Typy proměnnýchProměnné se dají rozdělit do mnoha kategorií podle svých vlastností. My si popíšeme jen několik z nich, které nám pomohou se vyvarovat některým chybám a lépe se dorozumět při komunikaci s ostatními analytiky. Kategoriální proměnnáKategoriální proměnná obsahuje informaci o kategorii, do které lze daný záznam zařadit - např. barva, typ auta či typ elektrické zásuvky. 
Důležitou vlastností kategoriálních proměnných je to, že je nelze porovnávat. Můžeme jen říci, zda jsou stejné nebo různé, ale už ne která hodnota je větší či menší. Numerické proměnnéNumerická proměnná je označení pro více druhů proměnných, které lze vyjádřit číslem, porovnávat a provádět s nimi matematické operace. Taková proměnná může být diskrétní (celočíselná) jako např. počet válců automobilu, počet návštěvníků na oslavě, nebo spojitá (metrická) jako např. tlak, teplota či rychlost. Ordinální proměnnáOrdinální proměnná je taková, u které dává smysl rozhodovat o pořadí hodnot - např. úroveň dosaženého vzdělání, příjmová skupina atp.Důležité je si uvědomit, že i kategoriální proměnná může být v datech označena číslem stejně jako ordinální proměnná může být označena slovním popisem. Je tedy vždy důležité k datům přistupovat podle jejich obsahu spíše než formy.V našem datasetu na nás čeká jedna kategoriální proměnná (pohlaví) a dvě numerické spojité (váha a výška). Pojďme se podívat, co je nám o nich Pandas schopen říci. Základní popisné statistiky ###Code data.describe() ###Output _____no_output_____ ###Markdown Metoda `describe` vezme všechny číselné sloupce tabulky a vypočítá pro ně několik základních statistických údajů. Pojďme si je společně popsat jeden po druhém. Počet, průměr, minimum a maximumPočet (count) udává celkový počet hodnot v daném sloupci. Je to tedy další způsob, jak se ujistit, že sloupce neobsahují nulové hodnoty.Průměr (mean) obsahuje aritmetický průměr - tedy součet všech hodnot podělen jejich počtem.Minimum a maximum obsahují minimální resp. maximální hodnotu pro daný sloupec. PercentilyPercentily nám umožňují dělit hodnoty ve sloupcích podle jejich zastoupení. V tomto případě se jedná o 25%, 50% a 75% percentily, které nám dělí hodnoty ve sloupcích na čtyři části.Z tohoto dělení se dá vyčíst, že 25 % lidí z našeho datasetu je menších než 161,3 cm a lehčích než 61,6 kg. Z druhého konce je možné odvodit podobnou informaci a tedy, že 25 % lidí z našeho datasetu je vyšších než 175,7 cm a těžších než 84,9 kg.> Protože máme pomocí percentilů rozdělen dataset na čtyři části, říká se jim též kvartily. V případě potřeby si pojmenovaným argumentem `percentiles` metody `describe` je možné zadat vlastní percentily. MediánMedián sice není v tabulce přímo zapsán, ale je to jen jiné označení pro 50% percentil.Je to velmi důležité číslo, protože nám udává, že polovina hodnot je pod ním a druhá polovina nad ním. O rozdílu mezi průměrem a mediánem ještě bude řeč. Standardní ochylkaStandardní (jinak též směrodatná) odchylka (standard deviation, std) je číslo označované malým řeckým písmenem sigma (σ), které nám říká, jak moc se typické hodnoty ve sloupci liší od průměru. Čím vyšší je směrodatná odchylka tím větší je rozptyl hodnot ve sloupci a naopak čím nižší je, tím blíže jsou jednotlivé hodnoty průměru.Pokud to teď nedává úplně smysl, nevadí, brzy si význam těchto hodnot ukážeme na praktických příkladech s využitím grafů. Důležitý rozdíl mezi průměrem a mediánemPojďme si zkusit demonstrovat důležitý rozdíl mezi průměrem a mediánem a co nám o datech může povědět pouhý pohled na tato dvě čísla.Pro potřeby této ukázky si vybereme deset náhodných mužů z našeho datasetu.> Za normálních okolností bychom pro náhodný výběr použili metodu `sample`, ale zde potřebujeme, aby byl výsledek reprodukovatelný při opětovém spuštění notebooku, a tak zadáme náhodně vybrané řádky ručně. 
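###Markdown
Jen pro úplnost: reprodukovatelný náhodný výběr umí i samotná metoda `sample`, pokud jí předáme pevný `random_state`. Následující buňka je jen náčrt (hodnota 42 je libovolná) a v dalším textu zůstaneme u ručně vybraných řádků.
###Code
# Náčrt: reprodukovatelný náhodný výběr deseti mužů pomocí pevného random_state
data[data.pohlavi == "Muž"].sample(10, random_state=42)
###Output
_____no_output_____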
###Code vyber = data.loc[[4768, 2549, 2325, 335, 3237, 736, 3178, 3721, 3711, 2246]] vyber vyber.describe() ###Output _____no_output_____ ###Markdown Nejmenší člen naší party měří 165 cm a váží skoro 71 kg. Na opačném konci je muž vážící skoro 97 kg při 189 cm výšky. Přůměrná výška je 175,74 cm a směrodatná odchylka výšky je 7,36 cm. Průměr a medián jsou velmi podobné hodnoty. Teď si k naší skupince přisedne nějaký obr. ###Code vyber.loc[10001] = "Muž", 251, 610 # Výška nejvyššího a váha nejtěžšího muže světa vyber vyber.describe() ###Output _____no_output_____ ###Markdown Stačil jeden obézní velikán a průměrná výška nám stoupla o 6,8 cm a váha o skoro 48 kg. Společně s průměrem nám do nebes vyletěla i směrodatná odchylka, ale co medián (50% percentil)? Ten zůstal skoro nezměněn. Společně se směrodatnou odchylkou nám totiž velký rozdíl mezi mediánem a průměrem může napovědět, že se v našich datech nachází tzv. odlehlá měření - tedy hodnoty, které jsou od průměru velmi vzdálené.> Jak se s těmito odlehlými měřeními vyrovnat, si ukážeme později, teď nám stačí vědět, že je diky znalostem základní popisné statistiky dokážeme identifikovat, aniž bychom museli pročítat všechny záznamy. A co kategoriální proměnná pohlaví? K té se tolik užitečných čísel nedozvíme, ale pár jich přeci jen bude. Tak například, kolik unikátních hodnot tento sloupec obsahuje? ###Code data.pohlavi.nunique() ###Output _____no_output_____ ###Markdown Metoda `nunique` nám vrátí počet unikátních hodnot ve sloupci pohlaví. Dvojka je na tomto místě očekávaný výsledek. Pokud by to číslo bylo jiné, mohlo by to znamenat, že je ve jméně některé z kategorií na některých řádcích překlep, který se často objevuje u ručně sbíraných dat. Nezbývá než se podívat, kolik zástupců od každého pohlaví naše data obsahují. K tomu použijeme metodu `value_counts()`. ###Code data.pohlavi.value_counts() ###Output _____no_output_____ ###Markdown Explorační datová analýza Explorační datová analýza (zkráceně EDA) je soubor technik a metod, které se používají k hledání zajímavých informací v datech a tvorbě hypotéz, které je následně možné testovat. EDA je často úplně první krok, který se s daty provádí a který do dalších analýz přináší nejen hodně užitečných poznatků o datech, ale také data samotná připravená k dalšímu zpracování.Dnes se společně podíváme na to, jak správně data načíst, odhalit základní chyby, zobrazit souhrné informace a následně provedeme analýzu jednotlivých proměnných a dojde i na základní vizualizace.Data k analýze jsou připravena v tabulce [vaha-vyska.csv](static/vaha-vyska.csv). ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Načtení a kontrola dat Opět máme data ve formátu CSV a tak si je načteme ze souboru funkcí `read_csv`. ###Code data = pd.read_csv("static/vaha-vyska.csv") ###Output _____no_output_____ ###Markdown Načtení se zřejmě povedlo, ale jistotu získáme, až když si data prohlédneme. ###Code data ###Output _____no_output_____ ###Markdown Výsledek sice vypadá jako tabulka, ale zcela v pořádku není. Jak je vidět, máme hodnoty výšky a váhy i v prvním řádku, kde bychom spíše očekávali názvy sloupců. 
###Markdown As we can see, the dataset contains exactly one half men and one half women. Visualization It is very important to be able to understand the descriptive statistics that make it easier to grasp the data we are looking at. An inseparable complement to them are plots, which can uncover further hidden properties or clarify information hidden in the numbers.First we import `pyplot` from the `matplotlib` library and then tell Jupyter to display plots directly below the code cell. ###Code from matplotlib import pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Histogram Histograms are among the most common plots in data analysis. A histogram has the values from the given table column spread along the horizontal axis, and the height of each bar shows how many times the given value is represented in the data. For readability, each individual value does not get a bar of its own; the values are grouped into bins. Let's draw one. ###Code data.vaha.plot(kind="hist", bins=40); ###Output _____no_output_____ ###Markdown Quite a few things happen on the previous line, so let's take them apart. The `plot` method can turn a column or a whole table into a plot. Which plot it will be is decided by the keyword argument `kind`. How many bins the histogram should be split into is set by the keyword argument `bins`.The semicolon at the end of the line prevents an unhelpful piece of information in the style of `` from being printed along with the plot.The histogram shows that instead of the expected single average, which would have the most representatives in the dataset, we have something like two averages. This may be caused by the fact that half of the records are women and the other half men. Let's try looking at these two groups separately.
###Code data[data.pohlavi == "Muž"].vaha.plot.hist(bins=30); data[data.pohlavi == "Žena"].vaha.plot.hist(bins=30); ###Output _____no_output_____ ###Markdown By filtering the rows before drawing the histogram we obtained two separate histograms - one for each sex. It is also worth noticing that instead of the keyword argument `kind` we chose to call the `plot.hist` method here, which works exactly the same but may suit some people better.The histograms make it clear that each sex has some average weight that is represented by the most records in the data. Some caution is needed, though: even though both histograms look quite similar, their horizontal axes contain completely different values - while there is a negligible number of men under 50 kg in the dataset, for women it is quite a sizeable group.If we want to change various settings of a plot, we can use the object-oriented interface of the `matplotlib` library, which lets us manipulate the visualizations in all sorts of ways. ###Code fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot(1, 1, 1) data[data.pohlavi == "Žena"].vaha.plot.hist(ax=ax, bins=30); ax.set_title("Histogram váhy žen"); ax.set_xlabel("Váha v kilogramech"); ###Output _____no_output_____ ###Markdown Calling `plt.figure` creates a container for the whole figure of the given size (in inches). `fig.add_subplot(1, 1, 1)` adds axes for the future plot to this figure. The three ones mean that the plot should be placed in an imaginary grid with one row and one column, at the first position.Because the objects representing the whole figure and the plot inside it are kept in separate variables, we can come back to them at any time and set whatever properties we like. Here it is useful, for clarity, to set a title for the plot and a label for the X axis.Drawing several histograms into the same place is not a problem either - we just tell pandas to use the prepared place in the figure when drawing the histogram. ###Code fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot() # No need to spell out the three ones every time data[data.pohlavi == "Žena"].vaha.plot.hist(ax=ax, bins=30); data[data.pohlavi == "Muž"].vaha.plot.hist(ax=ax, bins=30); ax.set_title("Histogram váhy mužů a žen"); ax.set_xlabel("Váha v kilogramech"); ###Output _____no_output_____ ###Markdown By merging the histograms like this we lost a bit of information, because we cannot see how the above-average-weight women are represented, but in exchange we got a better overview of how each of the groups compares in terms of the average.The possibility of putting several plots into one figure also makes it easier to compare them with one another. We can try drawing the histograms side by side as well as one below the other. ###Code fig = plt.figure(figsize=(10, 5)) ax1 = fig.add_subplot(2, 1, 1) # Two rows, one column, first plot ax2 = fig.add_subplot(2, 1, 2, sharex=ax1) # Second plot, shared X axis data[data.pohlavi == "Žena"].vyska.plot.hist(ax=ax1, bins=30); data[data.pohlavi == "Muž"].vyska.plot.hist(ax=ax2, bins=30); fig.suptitle("Histogram výšky mužů a žen"); ax2.set_xlabel("Výška v centimetrech"); ###Output _____no_output_____ ###Markdown `ax1` and `ax2` contain the prepared axes for both histograms; when creating them we placed them into a figure with two rows and one column, and we also set the second plot to share its X axis with the first one, which makes comparing them much easier.It is very similar with two columns and a shared Y axis.
###Code fig = plt.figure(figsize=(10, 5)) ax1 = fig.add_subplot(1, 2, 1) # One row, two columns, first plot ax2 = fig.add_subplot(1, 2, 2, sharey=ax1) # Second plot, shared Y axis data[data.pohlavi == "Žena"].vyska.plot.hist(ax=ax1, bins=30); data[data.pohlavi == "Muž"].vyska.plot.hist(ax=ax2, bins=30); fig.suptitle("Histogram výšky mužů a žen"); ax1.set_xlabel("Výška v centimetrech"); ax2.set_xlabel("Výška v centimetrech"); ###Output _____no_output_____ ###Markdown As we can see, the histograms stacked below each other with a shared X axis are far more informative in this case, because the distribution of height for both sexes is easy to read from them. Bar chart We cannot simply draw a histogram from the sex column, because it does not contain numeric values. But we already know how to get numeric values out of a column with a categorical variable, so it is enough to combine these two approaches and plot the result as a bar chart. ###Code data.pohlavi.value_counts().plot.bar(); ###Output _____no_output_____ ###Markdown The bar chart is here only for completeness, so that we have a graphical view of all the columns. Its options, of which there are really many, will be discussed later. Box plotThe last type of plot we will show today is the box plot. The less popular it is with the general public, the more information it carries for experienced analysts. Its interpretation is not always trivial, but in return it offers a lot of information in a single picture. ###Code data.plot.box(figsize=(10, 10)); ###Output _____no_output_____ ###Markdown The box plot shows both numeric variables at once. The green line in the middle marks the median. The area marked by the rectangle (the box) marks the range between the 25% and 75% percentiles (between the first and third quartile). The short horizontal lines (whiskers) mark the range defined by the formula 1.5 × IQR, where IQR is the so-called inter-quartile range, i.e. the range from the first to the third quartile, computed as Q3 - Q1. Whatever does not fit into this range is marked with a dot, which means those values are treated as outliers. It is, in other words, the essence of descriptive statistics in a box.To show this on concrete numbers: the IQR for weight is 84.9 (Q3) - 61.6 (Q1) = 23.3. One and a half times the IQR is 34.95. The whisker range is therefore 84.9 + 34.95 = 119.85 kg at the upper end and 61.6 - 34.95 = 26.65 kg at the lower end. Both the plot and the table show that only a single value falls outside this range, and that is the weight of 122.47 kg. A quick code check of these numbers is sketched after the next plot.> It is worth noticing that some plot parameters can also be set directly as keyword arguments of some of the `plot` methods, saving us further calls to various modifiers.Let's now draw a similar comparison of the weight values for men and women in a box plot. The box plot is made for comparisons like this, but unfortunately not in its plainer variant with the `plot.box` method. Much more powerful is the `boxplot` method, which lets us use the keyword argument `by` to set the column by which the records should be split into groups, and the argument `column` to select only the column we are interested in.> Why this is so and why there are two methods for the same job, I honestly don't know. In data-analysis tools you will often run into the approach where one problem can be solved in many different ways and it is up to you which one you pick and grow fond of.
###Code data.boxplot(column="vaha", by="pohlavi", figsize=(10, 10)); ###Output _____no_output_____
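###Markdown To tie the box plot back to the descriptive statistics, here is a small sketch (not part of the original lesson) that recomputes the quartiles, the IQR and the whisker limits for the weight column directly in pandas: ###Code
# Quartiles of the weight column
q1 = data.vaha.quantile(0.25)
q3 = data.vaha.quantile(0.75)
iqr = q3 - q1

# Whisker limits used by the box plot: 1.5 x IQR beyond the quartiles
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr
print(lower, upper)

# Rows whose weight falls outside the whiskers, i.e. the outliers
data[(data.vaha < lower) | (data.vaha > upper)]
###Output _____no_output_____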
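###Markdown Earlier in the lesson it was mentioned that `describe` accepts custom percentiles through its `percentiles` argument. As a small closing sketch (not part of the original lesson), this is what asking for the 5% and 95% percentiles could look like: ###Code
# describe() always includes the median; here we also request the 5% and 95% percentiles
data.describe(percentiles=[0.05, 0.95])
###Output _____no_output_____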
PULP/tutorial/1.35 .ipynb
###Markdown It is a guard against a stack overflow, yes. Python (or rather, the CPython implementation) doesn't optimize tail recursion, and unbridled recursion causes stack overflows. You can check the recursion limit with sys.getrecursionlimit and change the recursion limit with sys.setrecursionlimit, but doing so is dangerous -- the standard limit is a little conservative, but Python stackframes can be quite big. Python isn't a functional language and tail recursion is not a particularly efficient technique. Rewriting the algorithm iteratively, if possible, is generally a better idea. ###Code import sys sys.setrecursionlimit(1500) ###Output _____no_output_____
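###Markdown As a minimal illustration of the last point above, here is a sketch (not part of the original notes) of the same computation written once recursively and once as a loop; the recursive version needs one stack frame per step and can hit the recursion limit, while the iterative one runs in constant stack depth. ###Code
# Recursive sum of 1..n - one stack frame per element
def rec_sum(n):
    if n == 0:
        return 0
    return n + rec_sum(n - 1)

# The same computation rewritten iteratively - no recursion limit involved
def iter_sum(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(iter_sum(100000))  # fine regardless of sys.getrecursionlimit()
###Output _____no_output_____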
CVMusicSynthesis/object_detection_tutorial_Webcam_WORKING_ANDROID-Copy2.ipynb
###Markdown Imports ###Code # def _process_pathnames(fname, label_path): # # We map this function onto each pathname pair # img_str = tf.read_file(fname) # img = tf.image.decode_jpeg(img_str, channels=3) # label_img_str = tf.read_file(label_path) # # These are gif images so they return as (num_frames, h, w, c) # label_img = tf.image.decode_gif(label_img_str)[0] # # The label image should only have values of 1 or 0, indicating pixel wise # # object (car) or not (background). We take the first channel only. # label_img = label_img[:, :, 0] # label_img = tf.expand_dims(label_img, axis=-1) # return img, label_img # def shift_img(output_img, label_img, width_shift_range, height_shift_range): # """This fn will perform the horizontal or vertical shift""" # if width_shift_range or height_shift_range: # if width_shift_range: # width_shift_range = tf.random_uniform([], # -width_shift_range * img_shape[1], # width_shift_range * img_shape[1]) # if height_shift_range: # height_shift_range = tf.random_uniform([], # -height_shift_range * img_shape[0], # height_shift_range * img_shape[0]) # # Translate both # output_img = tfcontrib.image.translate(output_img, # [width_shift_range, height_shift_range]) # label_img = tfcontrib.image.translate(label_img, # [width_shift_range, height_shift_range]) # return output_img, label_img # def flip_img(horizontal_flip, tr_img, label_img): # if horizontal_flip: # flip_prob = tf.random_uniform([], 0.0, 1.0) # tr_img, label_img = tf.cond(tf.less(flip_prob, 0.5), # lambda: (tf.image.flip_left_right(tr_img), tf.image.flip_left_right(label_img)), # lambda: (tr_img, label_img)) # return tr_img, label_img # def _augment(img, # label_img, # resize=None, # Resize the image to some size e.g. [256, 256] # scale=1, # Scale image e.g. 1 / 255. 
# hue_delta=0, # Adjust the hue of an RGB image by random factor # horizontal_flip=False, # Random left right flip, # width_shift_range=0, # Randomly translate the image horizontally # height_shift_range=0): # Randomly translate the image vertically # if resize is not None: # # Resize both images # label_img = tf.image.resize_images(label_img, resize) # img = tf.image.resize_images(img, resize) # if hue_delta: # img = tf.image.random_hue(img, hue_delta) # img, label_img = flip_img(horizontal_flip, img, label_img) # img, label_img = shift_img(img, label_img, width_shift_range, height_shift_range) # label_img = tf.to_float(label_img) * scale # img = tf.to_float(img) * scale # return img, label_img # def get_baseline_dataset(filenames, # labels, # preproc_fn=functools.partial(_augment), # threads=5, # batch_size=batch_size, # shuffle=True): # num_x = len(filenames) # # Create a dataset from the filenames and labels # dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) # # Map our preprocessing function to every element in our dataset, taking # # advantage of multithreading # dataset = dataset.map(_process_pathnames, num_parallel_calls=threads) # if preproc_fn.keywords is not None and 'resize' not in preproc_fn.keywords: # assert batch_size == 1, "Batching images must be of the same size" # dataset = dataset.map(preproc_fn, num_parallel_calls=threads) # if shuffle: # dataset = dataset.shuffle(num_x) # # It's necessary to repeat our data for all epochs # dataset = dataset.repeat().batch(batch_size) # return dataset # tr_cfg = { # 'resize': [img_shape[0], img_shape[1]], # 'scale': 1 / 255., # 'hue_delta': 0.1, # 'horizontal_flip': True, # 'width_shift_range': 0.1, # 'height_shift_range': 0.1 # } # tr_preprocessing_fn = functools.partial(_augment, **tr_cfg) # val_cfg = { # 'resize': [img_shape[0], img_shape[1]], # 'scale': 1 / 255., # } # val_preprocessing_fn = functools.partial(_augment, **val_cfg) # train_ds = get_baseline_dataset(x_train_filenames, # y_train_filenames, # preproc_fn=tr_preprocessing_fn, # batch_size=batch_size) # val_ds = get_baseline_dataset(x_val_filenames, # y_val_filenames, # preproc_fn=val_preprocessing_fn, # batch_size=batch_size) # # Implementation of Inception-v4 architecture # # Author: Shobhit Lamba # # e-mail: [email protected] # # Importing the libraries # from keras.layers import Input # from keras.layers.merge import concatenate # from keras.layers import Dense, Dropout, Flatten, Activation, Conv2D # from keras.layers.convolutional import MaxPooling2D, AveragePooling2D # from keras.layers.normalization import BatchNormalization # from keras.models import Model # def conv_block(x, nb_filter, nb_row, nb_col, padding = "same", strides = (1, 1), use_bias = False): # '''Defining a Convolution block that will be used throughout the network.''' # x = Conv2D(nb_filter, (nb_row, nb_col), strides = strides, padding = padding, use_bias = use_bias)(x) # x = BatchNormalization(axis = -1, momentum = 0.9997, scale = False)(x) # x = Activation("relu")(x) # return x # def stem(input): # '''The stem of the pure Inception-v4 and Inception-ResNet-v2 networks. 
This is input part of those networks.''' # # Input shape is 299 * 299 * 3 (Tensorflow dimension ordering) # x = conv_block(input, 32, 3, 3, strides = (2, 2), padding = "same") # 149 * 149 * 32 # x = conv_block(x, 32, 3, 3, padding = "same") # 147 * 147 * 32 # x = conv_block(x, 64, 3, 3) # 147 * 147 * 64 # x1 = MaxPooling2D((3, 3), strides = (2, 2), padding = "same")(x) # x2 = conv_block(x, 96, 3, 3, strides = (2, 2), padding = "same") # x = concatenate([x1, x2], axis = -1) # 73 * 73 * 160 # x1 = conv_block(x, 64, 1, 1) # x1 = conv_block(x1, 96, 3, 3, padding = "same") # x2 = conv_block(x, 64, 1, 1) # x2 = conv_block(x2, 64, 1, 7) # x2 = conv_block(x2, 64, 7, 1) # x2 = conv_block(x2, 96, 3, 3, padding = "same") # x = concatenate([x1, x2], axis = -1) # 71 * 71 * 192 # x1 = conv_block(x, 192, 3, 3, strides = (2, 2), padding = "same") # x2 = MaxPooling2D((3, 3), strides = (2, 2), padding = "same")(x) # x = concatenate([x1, x2], axis = -1) # 35 * 35 * 384 # return x # def inception_A(input): # '''Architecture of Inception_A block which is a 35 * 35 grid module.''' # a1 = AveragePooling2D((3, 3), strides = (1, 1), padding = "same")(input) # a1 = conv_block(a1, 96, 1, 1) # a2 = conv_block(input, 96, 1, 1) # a3 = conv_block(input, 64, 1, 1) # a3 = conv_block(a3, 96, 3, 3) # a4 = conv_block(input, 64, 1, 1) # a4 = conv_block(a4, 96, 3, 3) # a4 = conv_block(a4, 96, 3, 3) # merged = concatenate([a1, a2, a3, a4], axis = -1) # return merged # def inception_B(input): # '''Architecture of Inception_B block which is a 17 * 17 grid module.''' # b1 = AveragePooling2D((3, 3), strides = (1, 1), padding = "same")(input) # b1 = conv_block(b1, 128, 1, 1) # b2 = conv_block(input, 384, 1, 1) # b3 = conv_block(input, 192, 1, 1) # b3 = conv_block(b3, 224, 1, 7) # b3 = conv_block(b3, 256, 7, 1) # b4 = conv_block(input, 192, 1, 1) # b4 = conv_block(b4, 192, 7, 1) # b4 = conv_block(b4, 224, 1, 7) # b4 = conv_block(b4, 224, 7, 1) # b4 = conv_block(b4, 256, 1, 7) # merged = concatenate([b1, b2, b3, b4], axis = -1) # return merged # def inception_C(input): # '''Architecture of Inception_C block which is a 8 * 8 grid module.''' # c1 = AveragePooling2D((3, 3), strides = (1, 1), padding = "same")(input) # c1 = conv_block(c1, 256, 1, 1) # c2 = conv_block(input, 256, 1, 1) # c3 = conv_block(input, 384, 1, 1) # c31 = conv_block(c2, 256, 1, 3) # c32 = conv_block(c2, 256, 3, 1) # c3 = concatenate([c31, c32], axis = -1) # c4 = conv_block(input, 384, 1, 1) # c4 = conv_block(c3, 448, 3, 1) # c4 = conv_block(c3, 512, 1, 3) # c41 = conv_block(c3, 256, 1, 3) # c42 = conv_block(c3, 256, 3, 1) # c4 = concatenate([c41, c42], axis = -1) # merged = concatenate([c1, c2, c3, c4], axis = -1) # return merged # def reduction_A(input, k = 192, l = 224, m = 256, n = 384): # '''Architecture of a 35 * 35 to 17 * 17 Reduction_A block.''' # ra1 = MaxPooling2D((3, 3), strides = (2, 2), padding = "same")(input) # ra2 = conv_block(input, n, 3, 3, strides = (2, 2), padding = "same") # ra3 = conv_block(input, k, 1, 1) # ra3 = conv_block(ra3, l, 3, 3) # ra3 = conv_block(ra3, m, 3, 3, strides = (2, 2), padding = "same") # merged = concatenate([ra1, ra2, ra3], axis = -1) # return merged # def reduction_B(input): # '''Architecture of a 17 * 17 to 8 * 8 Reduction_B block.''' # rb1 = MaxPooling2D((3, 3), strides = (2, 2), padding = "same")(input) # rb2 = conv_block(input, 192, 1, 1) # rb2 = conv_block(rb2, 192, 3, 3, strides = (2, 2), padding = "same") # rb3 = conv_block(input, 256, 1, 1) # rb3 = conv_block(rb3, 256, 1, 7) # rb3 = conv_block(rb3, 320, 7, 1) # 
rb3 = conv_block(rb3, 320, 3, 3, strides = (2, 2), padding = "same") # merged = concatenate([rb1, rb2, rb3], axis = -1) # return merged # def inception_v4(nb_classes = 1001, load_weights = True): # '''Creates the Inception_v4 network.''' # init = Input((299, 299, 3)) # Channels last, as using Tensorflow backend with Tensorflow image dimension ordering # # Input shape is 299 * 299 * 3 # x = stem(init) # Output: 35 * 35 * 384 # # 4 x Inception A # for i in range(4): # x = inception_A(x) # # Output: 35 * 35 * 384 # # Reduction A # x = reduction_A(x, k = 192, l = 224, m = 256, n = 384) # Output: 17 * 17 * 1024 # # 7 x Inception B # for i in range(7): # x = inception_B(x) # # Output: 17 * 17 * 1024 # # Reduction B # x = reduction_B(x) # Output: 8 * 8 * 1536 # # 3 x Inception C # for i in range(3): # x = inception_C(x) # # Output: 8 * 8 * 1536 # # Average Pooling # x = AveragePooling2D((8, 8))(x) # Output: 1536 # # Dropout # x = Dropout(0.2)(x) # Keep dropout 0.2 as mentioned in the paper # x = Flatten()(x) # Output: 1536 # # Output layer # output = Dense(units = nb_classes, activation = "softmax")(x) # Output: 1000 # model = Model(init, output, name = "Inception-v4") # return model # if __name__ == "__main__": # inception_v4 = inception_v4() # inception_v4.summary() import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image if tf.__version__ < '1.4.0': raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils_Copy1 as vis_util ###Output _____no_output_____ ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # # What model to download. # MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' # # MODEL_NAME = 'faster_rcnn_resnet101_kitti_2017_11_08' # # MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_oid' # MODEL_FILE = MODEL_NAME + '.tar.gz' # DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # # Path to frozen detection graph. This is the actual model that is used for the object detection. # PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # # List of the strings that is used to add correct label for each box. 
# PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') # NUM_CLASSES = 90 ###Output _____no_output_____ ###Markdown Download Model ###Code # decent performance # # MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' # MODEL_NAME = 'rfcn_resnet101_coco_2017_11_08' # opener = urllib.request.URLopener() # print(1) # opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) # print(1) # MODEL_NAME = 'faster_rcnn_resnet101_kitti_2017_11_08' # MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2017_11_08' # MODEL_NAME = 'ssd_inception_v2_coco_2017_11_17 (1)' # MODEL_NAME = 'facessd_mobilenet_v2_quantized_320x320_open_image_v4' # MODEL_NAME = 'faster_rcnn_resnet101_ava_v2.1_2018_04_30' # MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2018_01_28' # MODEL_NAME = 'inception_v4_2016_09_09' # MODEL_NAME = 'mobilenet_v1_0.25_128_quant' MODEL_NAME = 'mask_rcnn_resnet50_atrous_coco_2018_01_28' # MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_NAME = 'rfcn_resnet101_coco_2017_11_08' MODEL_FILE = MODEL_NAME + '.tar.gz' PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # PATH_TO_CKPT = 'inception_v4.ckpt' PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') NUM_CLASSES = 90 tar_file = tarfile.open(MODEL_FILE) print(1) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: # if 'inception_v4.ckpt' in file_name: tar_file.extract(file, os.getcwd()) ###Output 1 ###Markdown Load a (frozen) Tensorflow model into memory. ###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) namelist=[] indexlist=[] for i in category_index.values(): namelist.append(i['name']) indexlist.append(i['id']) # fname="coco-labels-2014_2017.txt" # with open(fname) as f: # content = f.readlines() # content = [x.strip() for x in content] # # content import cv2 cap=cv2.VideoCapture(0) # 0 stands for very first webcam attach # https://www.youtube.com/watch?v=BUrR6BTx6Mk filename="outputtest.avi"#[place were i stored my output file] codec=cv2.VideoWriter_fourcc('m','p','4','v')#fourcc stands for four character code framerate=30 resolution=(640,480) item_lost = False # item_lost = False counter=0 tmarker=0 VideoFileOutput=cv2.VideoWriter(filename,codec,framerate, resolution) with detection_graph.as_default(): with tf.Session(graph=detection_graph) as sess: ret=True while (ret): ret, image_np=cap.read() f = open('output.txt', 'r') x = [x.replace("\n", "") for x in f.readlines()][-1] for i in range(len(namelist)): if namelist[i] in x: # print(content[i], i) tmarker=indexlist[i] # print(tmarker) if tmarker != -1: item_lost = True if "found" in x: tmarker=-1 # Definite input and output Tensors for detection_graph image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. (boxes, scores, classes, num) = sess.run( [detection_boxes, detection_scores, detection_classes, num_detections], feed_dict={image_tensor: image_np_expanded}) # Visualization of the results of a detection. 
if item_lost: image_np = vis_util.lost_item_mode(image_np, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=8, marker=tmarker) cv2.imwrite(r'C:/Users/user/Desktop/Calhacks/TensorFlow-Object-Detection-API-On-Live-Video-Feed-master/models/object_detection/data_images/img_'+str(counter)+'.jpg', image_np) counter+=1 else: image_np = vis_util.visualize_boxes_and_labels_on_image_array( image_np, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=8) cv2.imwrite(r'C:/Users/user/Desktop/Calhacks/TensorFlow-Object-Detection-API-On-Live-Video-Feed-master/models/object_detection/data_images/img_'+str(counter)+'.jpg', image_np) counter+=1 VideoFileOutput.write(image_np) cv2.imshow('live_detection2',image_np) if cv2.waitKey(25) & 0xFF==ord('q'): break cv2.destroyAllWindows() cap.release() # # content # token_list=['handbag', 'bottle','skateboard', 'toothbrush', 'teddy bear', 'cell phone', 'keyboard', 'laptop', 'cake'] # indexlist=[] # for i in token_list: # for j in range(len(content)): # if i==content[j]: # indexlist.append(j) # indexlist # while True: # f = open('output.txt', 'r') # x = [x.replace("\n", "") for x in f.readlines()] # "just" in x[-1] ###Output _____no_output_____
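###Markdown The loop above is tied to a live webcam capture. As a rough sketch (not part of the original notebook), the same frozen graph and label map can also be run on a single image file; the file names used here are only assumed placeholders. ###Code
# Run the already-loaded detection_graph on one still image instead of the webcam
import numpy as np
import cv2
import tensorflow as tf

IMAGE_PATH = 'test_image.jpg'  # assumed example path - replace with a real file

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        image_np = cv2.imread(IMAGE_PATH)
        image_np_expanded = np.expand_dims(image_np, axis=0)
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        scores = detection_graph.get_tensor_by_name('detection_scores:0')
        classes = detection_graph.get_tensor_by_name('detection_classes:0')
        num = detection_graph.get_tensor_by_name('num_detections:0')
        (boxes, scores, classes, num) = sess.run(
            [boxes, scores, classes, num],
            feed_dict={image_tensor: image_np_expanded})
        # Draw the detections and save the annotated image to disk
        image_np = vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            np.squeeze(boxes),
            np.squeeze(classes).astype(np.int32),
            np.squeeze(scores),
            category_index,
            use_normalized_coordinates=True,
            line_thickness=8)
        cv2.imwrite('single_image_detection.jpg', image_np)  # assumed output name
###Output _____no_output_____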
colabs/manual.ipynb
###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unless you are changing the recipe, click play. ###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True) ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unless you are changing the recipe, click play.
###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields, json_expand_includes USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) json_expand_includes(TASKS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True) project.execute() ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unles you are changing the recipe, click play. ###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True) ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. 
When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unles you are changing the recipe, click play. ###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True) ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unless you are changing the recipe, click play. ###Code from starthinker.util.configuration import Configuration from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True) ###Output _____no_output_____ ###Markdown Test ScriptUsed by tests. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. 
There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md). ###Code from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) ###Output _____no_output_____ ###Markdown 3. Enter Test Script Recipe Parameters 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 4. Execute Test ScriptThis does NOT need to be modified unless you are changing the recipe, click play. ###Code from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Execute Test ScriptThis does NOT need to be modified unles you are changing the recipe, click play. 
###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True) project.execute() ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. When run will generate a say hello log.Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unles you are changing the recipe, click play. ###Code from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'say': 'Hello Manual', 'hour': [ ], 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True) ###Output _____no_output_____ ###Markdown 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play. ###Code !pip install git+https://github.com/google/starthinker ###Output _____no_output_____ ###Markdown 2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ###Code CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ###Output _____no_output_____ ###Markdown 3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ###Code CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ###Output _____no_output_____ ###Markdown 4. Enter Test Script ParametersUsed by tests. 1. This should be called by the tests scripts only. 1. 
When run will generate a say hello log. Modify the values below for your use case, can be done multiple times, then click play. ###Code FIELDS = { 'auth_read': 'user', # Credentials used for reading data. } print("Parameters Set To: %s" % FIELDS) ###Output _____no_output_____ ###Markdown 5. Execute Test ScriptThis does NOT need to be modified unless you are changing the recipe, click play. ###Code from starthinker.util.configuration import Configuration from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'hello': { 'auth': 'user', 'hour': [ ], 'say': 'Hello Manual', 'sleep': 0 } } ] json_set_fields(TASKS, FIELDS) execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True) ###Output _____no_output_____
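###Markdown The recipe executed above is just a Python dictionary, so it can also be persisted for reuse. Below is a small optional sketch (not part of the original notebook) that writes the resolved task list to disk in the `{'tasks': ...}` shape used by the execution helpers above; the file name is arbitrary. The generated-code notes elsewhere in this document reference `python starthinker/tools/colab.py [JSON RECIPE]` as a command that takes a recipe JSON file of this kind. ###Code
import json

# Persist the resolved recipe so it can be versioned or reused later.
# 'hello_manual_recipe.json' is an arbitrary, hypothetical file name.
recipe = {'tasks': TASKS}
with open('hello_manual_recipe.json', 'w') as f:
    json.dump(recipe, f, indent=2)

print('Recipe written to hello_manual_recipe.json')
###Output _____no_output_____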
YahooFinance/YahooFinance_Display_chart_from_ticker.ipynb
###Markdown YahooFinance - Display chart from ticker **Tags:** yahoofinance trading plotly naas_drivers **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) With this template, you can get data from any ticker available in [Yahoo finance](https://finance.yahoo.com/quote/TSLA/). Input Import libraries ###Code from naas_drivers import yahoofinance, plotly ###Output _____no_output_____ ###Markdown Input parameters👉 Here you can change the ticker, timeframe and add moving averages analysis ###Code ticker = "TSLA" date_from = -365 date_to = "today" interval = '1d' moving_averages = [20, 50] ###Output _____no_output_____ ###Markdown Model Get dataset from Yahoo Finance ###Code df_yahoo = yahoofinance.get(ticker, date_from=date_from, date_to=date_to, interval=interval, moving_averages=moving_averages) ###Output _____no_output_____ ###Markdown Output Display chart ###Code chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50"], showlegend=True, title=f"{ticker} stock as of today") ###Output _____no_output_____ ###Markdown YahooFinance - Display chart from ticker **Tags:** yahoofinance trading plotly naas_drivers With this template, you can get data from any ticker available in [Yahoo finance](https://finance.yahoo.com/quote/TSLA/). Input Import libraries ###Code from naas_drivers import yahoofinance, plotly ###Output _____no_output_____ ###Markdown Input parameters👉 Here you can change the ticker, timeframe and add moving averages analysis ###Code ticker = "TSLA" date_from = -365 date_to = "today" interval = '1d' moving_averages = [20, 50] ###Output _____no_output_____ ###Markdown Model Get dataset from Yahoo Finance ###Code df_yahoo = yahoofinance.get(ticker, date_from=date_from, date_to=date_to, interval=interval, moving_averages=moving_averages) ###Output _____no_output_____ ###Markdown Output Display chart ###Code chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50"], showlegend=True, title=f"{ticker} stock as of today") ###Output _____no_output_____ ###Markdown YahooFinance - Display chart from ticker **Tags:** yahoofinance trading plotly naas_drivers With this template, you can get data from any ticker available in [Yahoo finance](https://finance.yahoo.com/quote/TSLA/). Input Import libraries ###Code from naas_drivers import yahoofinance, plotly ###Output _____no_output_____ ###Markdown Input parameters👉 Here you can change the ticker, timeframe and add moving averages analysis ###Code ticker = "TSLA" date_from = -365 date_to = "today" interval = '1d' moving_averages = [20, 50] ###Output _____no_output_____ ###Markdown Model Get dataset from Yahoo Finance ###Code df_yahoo = yahoofinance.get(ticker, date_from=date_from, date_to=date_to, interval=interval, moving_averages=moving_averages) ###Output _____no_output_____ ###Markdown Output Display chart ###Code chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50"], showlegend=True, title=f"{ticker} stock as of today") ###Output _____no_output_____ ###Markdown YahooFinance - Display chart from ticker **Tags:** yahoofinance trading plotly naas_drivers investors snippet image **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) With this template, you can get data from any ticker available in [Yahoo finance](https://finance.yahoo.com/quote/TSLA/).
Input Import libraries ###Code from naas_drivers import yahoofinance, plotly ###Output _____no_output_____ ###Markdown Input parameters👉 Here you can change the ticker, timeframe and add moving averages analysis ###Code ticker = "TSLA" date_from = -365 date_to = "today" interval = '1d' moving_averages = [20, 50] ###Output _____no_output_____ ###Markdown Model Get dataset from Yahoo Finance ###Code df_yahoo = yahoofinance.get(ticker, date_from=date_from, date_to=date_to, interval=interval, moving_averages=moving_averages) ###Output _____no_output_____ ###Markdown Output Display chart ###Code chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50"], showlegend=True, title=f"{ticker} stock as of today") ###Output _____no_output_____ ###Markdown YahooFinance - Display chart from ticker **Tags:** yahoofinance trading plotly naas_drivers investors snippet image **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) With this template, you can get data from any ticker available in [Yahoo finance](https://finance.yahoo.com/quote/TSLA/). Input Import libraries ###Code from naas_drivers import yahoofinance, plotly ###Output _____no_output_____ ###Markdown Input parameters👉 Here you can change the ticker, timeframe and add moving averages analysis ###Code ticker = "TSLA" date_from = -365 date_to = "today" interval = '1d' moving_averages = [20, 50] ###Output _____no_output_____ ###Markdown Model Get dataset from Yahoo Finance ###Code df_yahoo = yahoofinance.get(ticker, date_from=date_from, date_to=date_to, interval=interval, moving_averages=moving_averages) ###Output _____no_output_____ ###Markdown Output Display chart ###Code chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50"], showlegend=True, title=f"{ticker} stock as of today") ###Output _____no_output_____
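###Markdown The `MA20` and `MA50` columns plotted above are presumably the moving averages requested through the `moving_averages` parameter. As an optional illustration (not part of the original template), an extra moving average can be derived manually from the `Close` column with pandas and added to the same chart; this sketch assumes `df_yahoo` is the DataFrame returned by `yahoofinance.get` above. ###Code
# Hypothetical extra column: 100-period simple moving average of the close price
df_yahoo["MA100"] = df_yahoo["Close"].rolling(window=100).mean()

# Plot it alongside the columns used earlier
chart = plotly.linechart(df_yahoo, x="Date", y=["Close", "MA20", "MA50", "MA100"],
                         showlegend=True, title=f"{ticker} stock with an extra MA100")
###Output _____no_output_____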
how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.18.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. 
In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. 
###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. 
experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. 
[Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.36.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-forecasting-energydemand" # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name pd.set_option("display.max_colwidth", -1) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. 
Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_DS12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = "demand" time_column_name = "timeStamp" dataset = Dataset.Tabular.from_delimited_files( path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv" ).with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. 
If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq="H", # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps["timeseriestransformer"].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps[ "timeseriestransformer" ].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. 
###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference( test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name, ) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv") ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
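###Markdown Before configuring the run, here is a small, purely illustrative pandas sketch of what lag and rolling-window features look like on a univariate series. This is only an analogy for intuition; AutoML generates its own engineered features internally, and the exact columns it builds may differ. ###Code
import pandas as pd

# Toy hourly series standing in for the demand target
y = pd.Series(range(24), name="demand")

lag_illustration = pd.DataFrame({
    "demand": y,
    "lag_1": y.shift(1),                           # value one period back
    "lag_12": y.shift(12),                         # value twelve periods back (cf. target_lags=12)
    "rolling_max_4": y.shift(1).rolling(4).max(),  # max over a window of 4 past values
    "rolling_min_4": y.shift(1).rolling(4).min(),  # min over the same window
    "rolling_sum_4": y.shift(1).rolling(4).sum(),  # sum over the same window
})
lag_illustration.head(8)
###Output _____no_output_____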
###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4, ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=[ "ElasticNet", "ExtremeRandomTrees", "GradientBoosting", "XGBoostRegressor", "ExtremeRandomTrees", "AutoArima", "Prophet", ], # These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters, ) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference( test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder="./forecast_advanced", ) advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file( "outputs/predictions.csv", "predictions_advanced.csv" ) fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b" ) test_test = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.29.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. 
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used for training and prediction. Let's set up what we know about the dataset.

The target column is what we want to forecast. The time column is the time axis along which to predict. The other columns, "temp" and "precip", are implicitly designated as features.
###Code
target_column_name = 'demand'
time_column_name = 'timeStamp'
dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name)
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
###Output
_____no_output_____
###Markdown
The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset.
###Code
# Cut off the end of the dataset due to large number of nan values
dataset = dataset.time_before(datetime(2017, 10, 10, 5))
###Output
_____no_output_____
###Markdown
Split the data into train and test sets
The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing.
###Code
# split into train based on time
train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)
# split into test based on time
test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().reset_index(drop=True).head(5)
###Output
_____no_output_____
###Markdown
Setting the maximum forecast horizon
The forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide.

In this example, we set the horizon to 48 hours.
###Code
forecast_horizon = 48
###Output
_____no_output_____
###Markdown
Forecasting Parameters
To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.

|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|

Train
Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|
|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment can take before it terminates.|
|**training_data**|The training data to be used within the experiment.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**forecasting_parameters**|A class that holds all the forecasting-related parameters.|

This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
    time_column_name=time_column_name,
    forecast_horizon=forecast_horizon,
    freq='H' # Set the forecast frequency to be hourly
)

automl_config = AutoMLConfig(task='forecasting',
                             primary_metric='normalized_root_mean_squared_error',
                             blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'],
                             experiment_timeout_hours=0.3,
                             training_data=train,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model
Below we select the best model from all the training iterations using the get_output method.
###Code
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
###Output
_____no_output_____
###Markdown
Featurization
You can access the engineered feature names generated in time-series featurization.
###Code
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
###Output
_____no_output_____
###Markdown
View featurization summary
You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:

+ Raw feature name
+ Number of engineered features formed out of this raw feature
+ Type detected
+ If feature was dropped
+ List of feature transformations for the raw feature

###Code
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
###Output
_____no_output_____
###Markdown
Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
###Code
X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
Forecast Function
For forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
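As a point of reference, MAPE is also easy to compute directly with numpy once predictions and actuals are aligned. The helper below is a minimal sketch (skipping zero-valued actuals is a simplifying assumption of this sketch, not necessarily the convention used by the AutoML scoring module); it can be applied to the aligned frame produced in the next cells.
###Code
import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error; rows with an actual value of zero are skipped
    # (a simplifying convention for this sketch).
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    nonzero = actual != 0
    return np.mean(np.abs((actual[nonzero] - predicted[nonzero]) / actual[nonzero])) * 100

# Example usage once the aligned frame from the next cells is available:
# mape(df_all[target_column_name], df_all['predicted'])
###Output
_____no_output_____
###Markdown
The next cells perform the alignment and compute the full metric set.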
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. 
Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.15.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. 
The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. 
This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
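If you prefer an explicit check rather than a printed comparison, a small guard like the one below can be added. The minimum version string here is an assumption taken from the version noted in the next cell.
###Code
# Optional guard: flag when the installed SDK is older than the version this notebook targets.
from distutils.version import LooseVersion
import azureml.core

MIN_SDK_VERSION = "1.32.0"  # assumption: matches the version printed in the next cell
if LooseVersion(azureml.core.VERSION) < LooseVersion(MIN_SDK_VERSION):
    print("Warning: this notebook targets azureml-sdk >= {} but found {}.".format(
        MIN_SDK_VERSION, azureml.core.VERSION))
else:
    print("Azure ML SDK version {} is OK.".format(azureml.core.VERSION))
###Output
_____no_output_____
###Markdown
The next cell records the SDK version this notebook was created with.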
###Code print("This notebook was created using version 1.32.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:

+ Raw feature name
+ Number of engineered features formed out of this raw feature
+ Type detected
+ If feature was dropped
+ List of feature transformations for the raw feature

###Code
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
###Output
_____no_output_____
###Markdown
Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute; in this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the model
We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
                                        compute_target=compute_target,
                                        train_run=best_run,
                                        test_dataset=test,
                                        target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)

# download the inference output file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
###Markdown
Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt

# use automl metrics module
scores = scoring.score_regression(
    y_test=fcst_df[target_column_name],
    y_pred=fcst_df['predicted'],
    metrics=list(constants.Metric.SCALAR_REGRESSION_SET))

print("[Test data scores]\n")
for key, value in scores.items():
    print('{}: {:.3f}'.format(key, value))

# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Advanced Training
We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner.
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference(test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder='./forecast_advanced') advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file('outputs/predictions.csv', 'predictions_advanced.csv') fcst_adv_df = pd.read_csv('predictions_advanced.csv', parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df['predicted'], color='b') test_test = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. 
###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.| ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima'], iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. 
There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. 
Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast?The following steps will allow you to compute and visualize engineered feature importance based on your test data for forecasting. Setup the model explanations for AutoML modelsThe *fitted_model* can generate the following which will be used for getting the engineered and raw feature explanations using *automl_setup_model_explanations*:-1. Featurized data from train samples/test samples 2. Gather engineered and raw feature name lists3. Find the classes in your labeled column in classification scenariosThe *automl_explainer_setup_obj* contains all the structures from above list. ###Code from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train.copy(), X_test=X_test.copy(), y=y_train, task='forecasting') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the *MimicWrapper* from *azureml.explain.model* package. The *MimicWrapper* can be initialized with fields in *automl_explainer_setup_obj*, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (*fitted_model* here). The *MimicWrapper* also takes the *best_run* object where the raw and engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, init_dataset=automl_explainer_setup_obj.X_transform, run=best_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map]) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe *explain()* method in *MimicWrapper* can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the generated engineered features by AutoML featurizers. 
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) from azureml.contrib.explain.model.visualize import ExplanationDashboard ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe *explain()* method in *MimicWrapper* can be again called with the transformed test samples and setting *get_raw* to *True* to get the feature importance for the raw features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform) print(raw_explanations.get_feature_importance_dict()) from azureml.contrib.explain.model.visualize import ExplanationDashboard ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. 
We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees'], iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. 
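For example, a minimal sketch (not part of the original notebook) of those overloads, assuming the `metric` and `iteration` keyword arguments of `get_output` in the AutoML SDK:
###Code
# Sketch only: retrieve the run/model that scored best on a specific logged metric,
# or the run/model produced by a particular iteration.
run_by_metric, model_by_metric = local_run.get_output(metric='normalized_root_mean_squared_error')
run_iter_3, model_iter_3 = local_run.get_output(iteration=3)
###Output
_____no_output_____
###Markdown
The cell below simply retrieves the overall best run and fitted model from the last fit invocation.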
###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. 
###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
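To build intuition for what these settings produce, here is a minimal pandas sketch (illustrative only, not the actual AutoML featurizer) of a target lag of 12 and a rolling window of size 4 over a toy hourly series:
###Code
import pandas as pd

# Toy hourly target series; values are illustrative only
y = pd.Series(range(24), index=pd.date_range('2017-08-08', periods=24, freq='H'))

lag_rolling_demo = pd.DataFrame({
    'y': y,
    'y_lag12': y.shift(12),             # target value 12 hours earlier
    'y_roll4_max': y.rolling(4).max(),  # max over a 4-hour window ending at the current hour
    'y_roll4_min': y.rolling(4).min(),  # min over the same 4-hour window
    'y_roll4_sum': y.rolling(4).sum(),  # sum over the same 4-hour window
})
lag_rolling_demo.head(16)
###Output
_____no_output_____
###Markdown
The corresponding AutoML time-series settings for this notebook are configured below.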
###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans_lags.columns[:-1] expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. 
The goal is to predict the energy demand for the next 48 hours based on historic time-series data. If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace. In this notebook you will learn how to:1. Create an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configure and remotely run AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code
import logging

from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import warnings
import os

# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None

import azureml.core
from azureml.core import Experiment, Workspace, Dataset
from azureml.train.automl import AutoMLConfig
from datetime import datetime
###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code
print("This notebook was created using version 1.17.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code
ws = Workspace.from_config()

# choose a name for the run history container in the workspace
experiment_name = 'automl-forecasting-energydemand'

# # project folder
# project_folder = './sample_projects/automl-forecasting-energy-demand'

experiment = Experiment(ws, experiment_name)

output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your cluster.
amlcompute_cluster_name = "energy-cluster"

# Verify that cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
                                                           max_nodes=6)
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset. The target column is what we want to forecast. The time column is the time axis along which to predict. The other columns, "temp" and "precip", are implicitly designated as features. ###Code
target_column_name = 'demand'
time_column_name = 'timeStamp'
dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name)
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code
# Cut off the end of the dataset due to the large number of NaN values
dataset = dataset.time_before(datetime(2017, 10, 10, 5))
###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code
# split into train based on time
train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)
# split into test based on time
test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().reset_index(drop=True).head(5)
###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale.
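As a rough sketch of that kind of aggregation (standalone and not used elsewhere in this notebook; the choice of summing demand and averaging the weather columns is an assumption), the hourly data could be resampled to a daily frequency with pandas, so that a 48-period horizon would span 48 days instead of 48 hours:
###Code
# Sketch only: aggregate the hourly series to daily values.
# Does not modify the `dataset`, `train`, or `test` objects used in the rest of the notebook.
daily_df = (dataset.to_pandas_dataframe()
                   .set_index(time_column_name)
                   .resample('D')
                   .agg({target_column_name: 'sum', 'temp': 'mean', 'precip': 'mean'})
                   .reset_index())
daily_df.head()
###Output
_____no_output_____
###Markdown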
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide. In this example, we set the horizon to 48 hours. ###Code
forecast_horizon = 48
###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the forecasting parameters to hold all the additional forecasting-specific settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment can take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
    time_column_name=time_column_name,
    forecast_horizon=forecast_horizon
)

automl_config = AutoMLConfig(task='forecasting',
                             primary_metric='normalized_root_mean_squared_error',
                             blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'],
                             experiment_timeout_hours=0.3,
                             training_data=train,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             forecasting_parameters=forecasting_parameters)
###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while. One may specify `show_output = True` to print currently running iterations to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code
X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
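For reference, MAPE in its usual form is the mean of the absolute percentage errors over the $n$ test observations with non-zero actuals, $$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|,$$ which matches the MAPE helper defined in the earlier notebooks in this document.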
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced Results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. 
Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. 
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)

# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
Data
We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.
Target column is what we want to forecast.
Time column is the time axis along which to predict.
The other columns, "temp" and "precip", are implicitly designated as features.
###Code
target_column_name = 'demand'
time_column_name = 'timeStamp'
dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name)
dataset.take(5).to_pandas_dataframe()
###Output
_____no_output_____
###Markdown
The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset.
###Code
# Cut off the end of the dataset due to large number of nan values
dataset = dataset.time_before(datetime(2017, 10, 10, 5))
###Output
_____no_output_____
###Markdown
Split the data into train and test sets
The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing.
###Code
# split into train based on time
train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().sort_values(time_column_name).tail(5)

# split into test based on time
test = dataset.time_between(datetime(2017, 8, 8, 5), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().head(5)
###Output
_____no_output_____
###Markdown
Setting the maximum forecast horizon
The forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.
In this example, we set the horizon to 48 hours.
###Code
max_horizon = 48
###Output
_____no_output_____
###Markdown
Train
Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment.
We can provide extra configurations within 'automl_settings'; for this forecasting task we add the name of the time column and the maximum forecast horizon.

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**blacklist_models**|Models in the blacklist won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.constants.supportedmodels.regression?view=azure-ml-py).|
|**experiment_timeout_minutes**|Maximum amount of time in minutes that the experiment takes before it terminates.|
|**training_data**|The training data to be used within the experiment.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**time_column_name**|The name of your time column.|
|**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.|

This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_minutes parameter value to get results.
###Code
automl_settings = {
    'time_column_name': time_column_name,
    'max_horizon': max_horizon,
}

automl_config = AutoMLConfig(task='forecasting',
                             primary_metric='normalized_root_mean_squared_error',
                             blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'],
                             experiment_timeout_minutes=20,
                             training_data=train,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             **automl_settings)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model
Below we select the best model from all the training iterations using the get_output method.
###Code
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
###Output
_____no_output_____
###Markdown
Featurization
You can access the engineered feature names generated in time-series featurization.
###Code
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
###Output
_____no_output_____
###Markdown
View featurization summary
You can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe() y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
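If you would rather fail fast on an older SDK than just eyeball the printed version, a minimal guard along these lines can be added (the 1.35.0 value is simply the version this notebook was authored against, and the `packaging` library may need to be installed separately):
###Code
import azureml.core
from packaging import version

REQUIRED_SDK = "1.35.0"  # version this notebook was written for (see the cell below)
if version.parse(azureml.core.VERSION) < version.parse(REQUIRED_SDK):
    raise RuntimeError(
        "Azure ML SDK {} found, but this notebook assumes >= {}.".format(
            azureml.core.VERSION, REQUIRED_SDK))
###Output
_____no_output_____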
###Code print("This notebook was created using version 1.35.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
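Before wrapping the file in a TabularDataset, it can be worth a quick local sanity check with pandas, for example confirming the covered time range and counting missing target values (an optional sketch; the column names match those described above):
###Code
import pandas as pd

raw = pd.read_csv(
    "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv",
    parse_dates=['timeStamp'])

print(raw.dtypes)                                            # demand, temp and precip should be numeric
print(raw['timeStamp'].min(), "->", raw['timeStamp'].max())  # covered time range
print("rows with missing demand:", raw['demand'].isna().sum())
raw.tail()
###Output
_____no_output_____
###Markdown
With the raw file checked, we define the schema and create the tabular dataset: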
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings'; for this forecasting task we add a ForecastingParameters object to hold all the additional forecasting-specific settings.

|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|
|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|
|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.|
|**training_data**|The training data to be used within the experiment.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**forecasting_parameters**|A class that holds all the forecasting-related parameters.|

This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.
###Code
from azureml.automl.core.forecasting_parameters import ForecastingParameters

forecasting_parameters = ForecastingParameters(
    time_column_name=time_column_name,
    forecast_horizon=forecast_horizon,
    freq='H' # Set the forecast frequency to be hourly
)

automl_config = AutoMLConfig(task='forecasting',
                             primary_metric='normalized_root_mean_squared_error',
                             blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'],
                             experiment_timeout_hours=0.3,
                             training_data=train,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             forecasting_parameters=forecasting_parameters)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console.
###Code
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Model
Below we select the best model from all the training iterations using the get_output method.
###Code
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
###Output
_____no_output_____
###Markdown
Featurization
You can access the engineered feature names generated in time-series featurization.
###Code
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
###Output
_____no_output_____
###Markdown
View featurization summary
You can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:
+ Raw feature name
+ Number of engineered features formed out of this raw feature
+ Type detected
+ If feature was dropped
+ List of feature transformations for the raw feature
###Code
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
###Output
_____no_output_____
###Markdown
Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.
The inference will run on a remote compute. In this example, it will re-use the training compute.
###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the model
We have created a function called `run_remote_inference` (in the `run_forecast` module) that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute.
###Code
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(test_experiment=test_experiment,
                                        compute_target=compute_target,
                                        train_run=best_run,
                                        test_dataset=test,
                                        target_column_name=target_column_name)
remote_run_infer.wait_for_completion(show_output=False)

# download the inference output file to the local machine
remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv')
###Output
_____no_output_____
###Markdown
Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).
###Code
# load forecast data frame
fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name])
fcst_df.head()

from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt

# use automl metrics module
scores = scoring.score_regression(
    y_test=fcst_df[target_column_name],
    y_pred=fcst_df['predicted'],
    metrics=list(constants.Metric.SCALAR_REGRESSION_SET))

print("[Test data scores]\n")
for key, value in scores.items():
    print('{}: {:.3f}'.format(key, value))

# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b')
test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Advanced Training
We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner.
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
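To build some intuition for what the advanced configuration adds, the sketch below shows, on a toy hourly series and purely for illustration (this is not how AutoML constructs its features internally), what a target lag of 12 and a rolling window of size 4 look like in pandas:
###Code
import pandas as pd
import numpy as np

# Toy hourly series standing in for the demand target
toy = pd.DataFrame({
    'timeStamp': pd.date_range('2017-08-01', periods=24, freq='H'),
    'demand': np.random.default_rng(0).normal(6000, 300, 24)
})

# target_lags=12 -> the demand observed 12 hours earlier becomes a feature
toy['demand_lag12'] = toy['demand'].shift(12)

# target_rolling_window_size=4 -> min/max/sum over the preceding 4 hours
rolled = toy['demand'].shift(1).rolling(window=4)
toy['demand_roll4_min'] = rolled.min()
toy['demand_roll4_max'] = rolled.max()
toy['demand_roll4_sum'] = rolled.sum()

toy.tail()
###Output
_____no_output_____
###Markdown
As before, we score the advanced model by running remote inference on the test set and comparing the predictions to the actuals.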
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference(test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder='./forecast_advanced') advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file('outputs/predictions.csv', 'predictions_advanced.csv') fcst_adv_df = pd.read_csv('predictions_advanced.csv', parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df['predicted'], color='b') test_test = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. 
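A quick, optional check makes the missing tail visible before we split (this uses the `data` frame loaded above and anticipates part of what the split code below computes):
###Code
# Last timestamp for which the demand target is actually known
last_known = data.loc[data[target_column_name].notna(), time_column_name].max()
print("latest known demand at:", last_known)

# How many trailing rows have no target at all?
trailing = data[data[time_column_name] > last_known]
print("trailing rows with missing demand:", trailing[target_column_name].isna().sum(), "of", len(trailing))
###Output
_____no_output_____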
###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.| ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees'], iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. 
There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. 
Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast?The following steps will allow you to compute and visualize engineered feature importance based on your test data for forecasting. Set up the model explanations for AutoML modelsThe *fitted_model* can generate the following, which will be used for getting the engineered and raw feature explanations using *automl_setup_model_explanations*: 1. Featurized data from train samples/test samples 2. Gather engineered and raw feature name lists 3. Find the classes in your labeled column in classification scenarios The *automl_explainer_setup_obj* contains all the structures from the above list. ###Code from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train.copy(), X_test=X_test.copy(), y=y_train, task='forecasting') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the *MimicWrapper* from the *azureml.explain.model* package. The *MimicWrapper* can be initialized with fields in *automl_explainer_setup_obj*, your workspace, and a LightGBM model which acts as a surrogate model to explain the AutoML model (*fitted_model* here). The *MimicWrapper* also takes the *best_run* object where the raw and engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, init_dataset=automl_explainer_setup_obj.X_transform, run=best_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map]) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe *explain()* method in *MimicWrapper* can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use *ExplanationDashboard* to view the dashboard visualization of the feature importance values of the engineered features generated by the AutoML featurizers. 
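As background, the mimic approach described above is an instance of the general "global surrogate" idea: fit a simpler, interpretable model to the predictions of the model you want to explain, then read importances from the surrogate. The following is a minimal, self-contained scikit-learn sketch of that idea on synthetic data; it is not the azureml *MimicWrapper* implementation, and every name and value in it is illustrative. ###Code
# Conceptual sketch only -- NOT the azureml MimicWrapper implementation.
# A "mimic" (global surrogate) explainer fits an interpretable model to the
# predictions of a black-box model and reads feature importances off the surrogate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_demo = rng.rand(500, 3)                           # three synthetic features
y_demo = 10 * X_demo[:, 0] + 0.1 * rng.rand(500)    # mostly driven by feature 0

# Pretend this is the opaque model we want to explain
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_demo, y_demo)

# Surrogate: a small interpretable model trained to mimic the black box's outputs
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X_demo, black_box.predict(X_demo))

print(dict(zip(['feature_0', 'feature_1', 'feature_2'], surrogate.feature_importances_)))
###Output
_____no_output_____
###Markdown
The cells below compute the explanations for the actual fitted model and open the dashboard.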
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) from azureml.contrib.explain.model.visualize import ExplanationDashboard ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe *explain()* method in *MimicWrapper* can be again called with the transformed test samples and setting *get_raw* to *True* to get the feature importance for the raw features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform) print(raw_explanations.get_feature_importance_dict()) from azureml.contrib.explain.model.visualize import ExplanationDashboard ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](Advanced Training)1. [Advanced Results](Advanced Results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20) # For a more detailed view of current AmlCompute status, use get_status(). ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. 
Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe() ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 5), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. 
All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.constants.supportedmodels.regression?view=azure-ml-py).||**experiment_timeout_minutes**|Maximum amount of time in minutes that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_minutes parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. 
First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe() y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. There are two reasons for this.We need to pass the recent values of the target variable y, whereas the scikit-compatible predict function only takes the non-target variables 'test'. In our case, the test data immediately follows the training data, and we fill the target variable with NaN. The NaN serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the forecast origin - the last time when the value of the target is known.Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y by NaN. # The forecast origin will be at the beginning of the first forecast period. # (Which is the same time as the end of the last training period.) y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test, y_query) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced TrainingWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. 
In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # Replace ALL values in y by NaN. # The forecast origin will be at the beginning of the first forecast period. # (Which is the same time as the end of the last training period.) y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test, y_query) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.25.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
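To make these column roles concrete, here is a small illustrative sketch in plain pandas; the values below are made up and are not the real data, which is loaded from Azure storage in the next cell. ###Code
# Illustrative only: how the column roles break down for this dataset.
# The numbers are made up; the real data is loaded in the following cell.
import pandas as pd

sample = pd.DataFrame({
    'timeStamp': pd.date_range('2017-08-01', periods=3, freq='H'),  # time axis
    'demand': [6000.0, 5900.0, 5800.0],                             # target to forecast
    'precip': [0.0, 0.1, 0.0],                                      # implicit feature
    'temp': [74.0, 73.5, 73.0]                                      # implicit feature
})

y_sample = sample.pop('demand')   # the label column we want to predict
X_sample = sample                 # time column plus remaining columns are the model inputs
print(X_sample.columns.tolist(), '-> predicts ->', y_sample.name)
###Output
_____no_output_____
###Markdown
The next cell defines the actual column names and loads the data.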
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. 
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.24.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. 
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
###Code print("This notebook was created using version 1.33.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings'; for this forecasting task we add the forecasting parameters to hold all the additional forecasting-specific settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. ###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference(test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv') ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b') test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner.
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
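###Markdown As an aside, the sketch below illustrates the idea behind rolling-origin (forecast-origin) cross-validation with `n_cross_validations=3` and a 48-hour horizon: each fold validates on a later block of time and trains only on data before it. This is a conceptual illustration using a stand-in hourly index, not AutoML's exact internal splitting logic.
###Code
import pandas as pd

# Conceptual sketch of rolling-origin validation (not AutoML's exact internals):
# each fold validates on a 48-hour block and trains on everything before it.
n_cross_validations, horizon = 3, 48
index = pd.date_range("2017-06-01", periods=500, freq="H")  # stand-in hourly index

for fold in range(n_cross_validations):
    val_end = len(index) - fold * horizon
    val_start = val_end - horizon
    print("Fold {}: train up to {}, validate {} -> {}".format(
        fold + 1, index[val_start - 1], index[val_start], index[val_end - 1]))
###Output
_____no_output_____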
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference(test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder='./forecast_advanced') advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file('outputs/predictions.csv', 'predictions_advanced.csv') fcst_adv_df = pd.read_csv('predictions_advanced.csv', parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df['predicted'], color='b') test_test = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
###Code print("This notebook was created using version 1.7.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to a large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits.
Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. 
Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
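###Markdown To build intuition for what these two settings produce, the sketch below constructs analogous features by hand with pandas: a 12-period lag of the target and 4-period rolling max/min/sum aggregates. This is purely illustrative; AutoML generates its own engineered lag and rolling-window features internally, so this is not the code it runs.
###Code
import pandas as pd

# Illustrative only: hand-built equivalents of a 12-period target lag and
# 4-period rolling-window aggregates (AutoML engineers its own versions internally).
demo = pd.DataFrame({"demand": range(24)})

demo["demand_lag12"] = demo["demand"].shift(12)
past_window = demo["demand"].shift(1).rolling(window=4)  # shift(1) keeps the window strictly in the past
demo["demand_roll4_max"] = past_window.max()
demo["demand_roll4_min"] = past_window.min()
demo["demand_roll4_sum"] = past_window.sum()

print(demo.tail())
###Output
_____no_output_____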
###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.19.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. 
If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. 
It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide. In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task we add the forecasting parameters to hold all the additional forecasting-specific settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.
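###Markdown For intuition about the primary metric used below: normalized_root_mean_squared_error is RMSE scaled so that scores are comparable across targets of different magnitudes (sketched here as RMSE divided by the range of the actuals, using made-up numbers). The authoritative values come from the azureml scoring module applied after training; this snippet is only a back-of-the-envelope illustration.
###Code
import numpy as np

# Minimal sketch of a normalized RMSE: RMSE divided by the range of the actuals.
# The y_true / y_pred values below are made-up example numbers, not model output.
y_true = np.array([5200.0, 5350.0, 5100.0, 4980.0])
y_pred = np.array([5150.0, 5400.0, 5050.0, 5020.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
normalized_rmse = rmse / (y_true.max() - y_true.min())
print("RMSE: {:.2f}, normalized RMSE: {:.4f}".format(rmse, normalized_rmse))
###Output
_____no_output_____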
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare the predictions against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is, the previous values of the target variable, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor', 'AutoArima', 'Prophet'], # These models are blocked for tutorial purposes, remove this for real use cases.
experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. 
[Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import json import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.38.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-forecasting-energydemand" # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name pd.set_option("display.max_colwidth", -1) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. 
Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_DS12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = "demand" time_column_name = "timeStamp" dataset = Dataset.Tabular.from_delimited_files( path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv" ).with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. 
If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
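Because `freq` must be a pandas offset alias, it can help to confirm the alias that matches the raw data before setting it. The check below is only a sketch (it reuses the `train` dataset and `time_column_name` defined above) and is not required by AutoML. ###Code
# Sanity-check the native frequency of the series before passing freq='H'.
# pd.infer_freq may return None if the series has gaps or irregular timestamps.
ts = train.to_pandas_dataframe()[[time_column_name]].sort_values(time_column_name)
inferred = pd.infer_freq(pd.DatetimeIndex(pd.to_datetime(ts[time_column_name])))
print('Inferred pandas offset alias:', inferred)  # expect 'H' for this hourly dataset
###Output _____no_output_____ ###Markdown The next cell builds the forecasting parameters, including the hourly frequency, and the AutoML configuration.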
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq="H", # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment. ###Code best_run = remote_run.get_best_child() best_run ###Output _____no_output_____ ###Markdown FeaturizationWe can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs. ###Code # Download the JSON file locally best_run.download_file("outputs/engineered_feature_names.json", "engineered_feature_names.json") with open("engineered_feature_names.json", "r") as f: records = json.load(f) records ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Download the featurization summary JSON file locally best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json") # Render the JSON as a pandas DataFrame with open("featurization_summary.json", "r") as f: records = json.load(f) fs = pd.DataFrame.from_records(records) # View a summary of the featurization fs[["RawFeatureName", "TypeDetected", "Dropped", "EngineeredFeatureCount", "Transformations"]] ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a helper function, `run_remote_inference` (defined in the `run_forecast` script), that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute.
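Conceptually, the remote scoring script loads the trained model, runs the same `forecast` call shown earlier in this notebook on the test data, and writes the results to `outputs/predictions.csv` (the file downloaded in the next cell). The function below is only a hypothetical outline of that core logic; the actual `forecasting_script` shipped with this sample also handles argument parsing, dataset retrieval and model loading through the AzureML SDK. ###Code
import os
import pandas as pd

def score_remotely_sketch(fitted_model, test_df, target_column_name,
                          output_path='outputs/predictions.csv'):
    # Hypothetical outline of the remote scoring step, for illustration only.
    y_actual = test_df.pop(target_column_name).values   # keep actuals for evaluation
    y_pred, _ = fitted_model.forecast(test_df)           # same forecast() call used locally
    result = test_df.copy()
    result['predicted'] = y_pred
    result[target_column_name] = y_actual
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    result.to_csv(output_path, index=False)
    return result
###Output _____no_output_____ ###Markdown The cells below submit the actual remote inference run and download its predictions.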
###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference( test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name, ) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv") ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
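To see what these settings mean in terms of the raw series, the short pandas sketch below builds a couple of lag columns and 4-period rolling aggregates by hand. The column names are invented for illustration; AutoML engineers its own lag and rolling-window features from `target_lags` and `target_rolling_window_size`, so you do not need to create these yourself. ###Code
# Illustration only: hand-built lag and rolling-window features on the demand series.
demand_df = train.to_pandas_dataframe()[[time_column_name, target_column_name]] \
                 .sort_values(time_column_name)
for lag in (1, 12):
    demand_df['demand_lag_{}'.format(lag)] = demand_df[target_column_name].shift(lag)
rolling = demand_df[target_column_name].rolling(window=4)
demand_df['demand_roll4_min'] = rolling.min()
demand_df['demand_roll4_max'] = rolling.max()
demand_df['demand_roll4_sum'] = rolling.sum()
demand_df.tail(5)
###Output _____no_output_____ ###Markdown The configuration below asks AutoML to generate these kinds of features automatically.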
###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4, ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=[ "ElasticNet", "ExtremeRandomTrees", "GradientBoosting", "XGBoostRegressor", "AutoArima", "Prophet", ], # These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters, ) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Run details ###Code best_run_lags = advanced_remote_run.get_best_child() best_run_lags ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.
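The evaluation section earlier in this notebook calls out MAPE specifically, but the AutoML scoring module reports it as part of a larger set of regression metrics. If you want to compute MAPE directly from the downloaded predictions, a minimal helper like the one below works; it is a simple sketch, not part of the AzureML scoring API. ###Code
import numpy as np

def mean_absolute_percentage_error(y_true, y_pred):
    # Simple MAPE sketch; rows where the actual value is zero are skipped
    # to avoid division by zero.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    nonzero = y_true != 0
    return np.mean(np.abs((y_true[nonzero] - y_pred[nonzero]) / y_true[nonzero])) * 100.0
###Output _____no_output_____ ###Markdown Once the predictions file is downloaded below, this can be applied as `mean_absolute_percentage_error(fcst_adv_df[target_column_name], fcst_adv_df['predicted'])`. The cells that follow retrieve forecasts from the lag model and score them.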
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference( test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder="./forecast_advanced", ) advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file( "outputs/predictions.csv", "predictions_advanced.csv" ) fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b" ) test_test = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.20.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. 
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. 
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.31.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code
X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values
###Output
_____no_output_____
###Markdown
Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code
from forecasting_helper import align_outputs

df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt

# use automl metrics module
scores = scoring.score_regression(
    y_test=df_all[target_column_name],
    y_pred=df_all['predicted'],
    metrics=list(constants.Metric.SCALAR_REGRESSION_SET))

print("[Test data scores]\n")
for key, value in scores.items():
    print('{}: {:.3f}'.format(key, value))

# Plot outputs
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code
X_trans
###Output
_____no_output_____
###Markdown
Advanced Training We did not use lags in the previous model specification.
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. 
We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. 
###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. 
Remove NA and values where actual is close to zero
    """
    not_na = ~(np.isnan(actual) | np.isnan(pred))
    not_zero = ~np.isclose(actual, 0.0)
    actual_safe = actual[not_na & not_zero]
    pred_safe = pred[not_na & not_zero]
    APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
    return np.mean(APE)

print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))

# Plot outputs
%matplotlib inline
pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b')
actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g')
plt.xticks(fontsize=8)
plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.title('Prediction vs. Actual Time-Series')
plt.show()
###Output
_____no_output_____
###Markdown
The distribution looks a little heavy-tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Now we configure target lags, that is, the previous values of the target variable, so the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. ###Code
time_series_settings_with_lags = {
    'time_column_name': time_column_name,
    'max_horizon': max_horizon,
    'target_lags': 12,
    'target_rolling_window_size': 4
}

automl_config_lags = AutoMLConfig(task='forecasting',
                                  debug_log='automl_nyc_energy_errors.log',
                                  primary_metric='normalized_root_mean_squared_error',
                                  blacklist_models=['ElasticNet'],
                                  iterations=10,
                                  iteration_timeout_minutes=10,
                                  X=X_train,
                                  y=y_train,
                                  n_cross_validations=3,
                                  path=project_folder,
                                  verbosity=logging.INFO,
                                  **time_series_settings_with_lags)
###Output
_____no_output_____
###Markdown
We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations.
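Before launching that run, it can help to see concretely what kind of columns a `target_lags` of 12 and a `target_rolling_window_size` of 4 give rise to. The cell below hand-builds roughly equivalent features on a toy hourly series with pandas; it is an illustration of the idea only, not AutoML's internal featurizer, and the toy column names are invented for this sketch. ###Code
import numpy as np
import pandas as pd

# Toy hourly series standing in for the demand target
rng = pd.date_range("2017-08-01", periods=24, freq="H")
toy = pd.DataFrame({"demand": np.random.default_rng(0).normal(6500, 300, len(rng))}, index=rng)

# Lag feature: the value observed 12 hours earlier (cf. target_lags = 12)
toy["demand_lag12"] = toy["demand"].shift(12)

# Rolling-window features over the 4 preceding hours (cf. target_rolling_window_size = 4);
# shifting by one step first ensures only past values enter the window
past = toy["demand"].shift(1).rolling(window=4)
toy["demand_roll4_min"] = past.min()
toy["demand_roll4_max"] = past.max()
toy["demand_roll4_sum"] = past.sum()

toy.tail()
###Output
_____no_output_____
###Markdown
With that picture in mind, we submit the lag-enabled configuration.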
###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans_lags.columns[:-1] expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast with lagging features Setup ###Code import json import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This notebook is compatible with Azure ML SDK version 1.35.0 or later. ###Code print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-forecasting-energydemand" # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name output["SDK Version"] = azureml.core.VERSION pd.set_option("display.max_colwidth", None) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_DS12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. 
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = "demand" time_column_name = "timeStamp" dataset = Dataset.Tabular.from_delimited_files( path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv" ).with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. 
Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq="H", # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment. ###Code best_run = remote_run.get_best_child() best_run ###Output _____no_output_____ ###Markdown FeaturizationWe can look at the engineered feature names generated in time-series featurization via. the JSON file named 'engineered_feature_names.json' under the run outputs. 
###Code
# Download the JSON file locally
best_run.download_file(
    "outputs/engineered_feature_names.json", "engineered_feature_names.json"
)
with open("engineered_feature_names.json", "r") as f:
    records = json.load(f)

records
###Output
_____no_output_____
###Markdown
View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code
# Download the featurization summary JSON file locally
best_run.download_file(
    "outputs/featurization_summary.json", "featurization_summary.json"
)

# Render the JSON as a pandas DataFrame
with open("featurization_summary.json", "r") as f:
    records = json.load(f)
fs = pd.DataFrame.from_records(records)

# View a summary of the featurization
fs[
    [
        "RawFeatureName",
        "TypeDetected",
        "Dropped",
        "EngineeredFeatureCount",
        "Transformations",
    ]
]
###Output
_____no_output_____
###Markdown
ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code
test_experiment = Experiment(ws, experiment_name + "_inference")
###Output
_____no_output_____
###Markdown
Retrieving forecasts from the modelWe have created a helper function, `run_remote_inference` (in the `run_forecast` script), that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute. ###Code
from run_forecast import run_remote_inference

remote_run_infer = run_remote_inference(
    test_experiment=test_experiment,
    compute_target=compute_target,
    train_run=best_run,
    test_dataset=test,
    target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)

# download the inference output file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
###Output
_____no_output_____
###Markdown
EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).
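As a point of reference, MAPE is simply the mean of |actual - predicted| / |actual|, expressed as a percentage. The cell below is a minimal, hand-rolled sketch of that calculation (it is not the AutoML metrics module); the `demand` and `predicted` column names mirror the forecast data frame loaded in the next cell, and the sample numbers are made up. ###Code
import numpy as np
import pandas as pd

def mape(actual, predicted):
    """Mean absolute percentage error, ignoring NaNs and near-zero actuals."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ok = ~(np.isnan(actual) | np.isnan(predicted)) & ~np.isclose(actual, 0.0)
    return 100 * np.mean(np.abs((actual[ok] - predicted[ok]) / actual[ok]))

# Tiny hand-made example: three hourly demand values and their forecasts
sample = pd.DataFrame({"demand": [6000.0, 6500.0, 7000.0],
                       "predicted": [5900.0, 6600.0, 6800.0]})
print("MAPE: %.2f%%" % mape(sample["demand"], sample["predicted"]))
###Output
_____no_output_____
###Markdown
We now load the saved predictions and score them with the AutoML metrics module.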
###Code # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4, ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=[ "ElasticNet", "ExtremeRandomTrees", "GradientBoosting", "XGBoostRegressor", "ExtremeRandomTrees", "AutoArima", "Prophet", ], # These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters, ) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. 
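The cross-validation folds used here are produced with rolling-origin validation: each fold trains on an initial stretch of the series and validates on the window that immediately follows it, with the split point rolling forward from fold to fold, so the model is never validated on data that precedes its training data. The cell below is only an illustration of that idea, using scikit-learn's `TimeSeriesSplit` on a stand-in hourly index; it is not how AutoML generates its folds internally. ###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

# Stand-in hourly index covering one week, roughly mimicking the training portion of the data
times = pd.date_range("2017-08-01", periods=24 * 7, freq="H")

# Three temporally ordered folds: every validation window starts after its training window ends
tscv = TimeSeriesSplit(n_splits=3)
for fold, (train_idx, valid_idx) in enumerate(tscv.split(np.arange(len(times))), start=1):
    print(f"Fold {fold}: train through {times[train_idx][-1]}, "
          f"validate {times[valid_idx][0]} to {times[valid_idx][-1]}")
###Output
_____no_output_____
###Markdown
With those folds in mind, we launch the advanced run.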
###Code
advanced_remote_run = experiment.submit(automl_config, show_output=False)
advanced_remote_run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Retrieve the Best Run details ###Code
best_run_lags = advanced_remote_run.get_best_child()
best_run_lags
###Output
_____no_output_____
###Markdown
Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code
test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced")
advanced_remote_run_infer = run_remote_inference(
    test_experiment=test_experiment_advanced,
    compute_target=compute_target,
    train_run=best_run_lags,
    test_dataset=test,
    target_column_name=target_column_name,
    inference_folder="./forecast_advanced",
)
advanced_remote_run_infer.wait_for_completion(show_output=False)

# download the inference output file to the local machine
advanced_remote_run_infer.download_file(
    "outputs/predictions.csv", "predictions_advanced.csv"
)
fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name])
fcst_adv_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt

# use automl metrics module
scores = scoring.score_regression(
    y_test=fcst_adv_df[target_column_name],
    y_pred=fcst_adv_df["predicted"],
    metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)

print("[Test data scores]\n")
for key, value in scores.items():
    print("{}: {:.3f}".format(key, value))

# Plot outputs
%matplotlib inline
test_pred = plt.scatter(
    fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b"
)
test_test = plt.scatter(
    fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g"
)
plt.legend(
    (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is to predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set.
Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. 
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
    found = True
    print('Found existing compute target.')
    compute_target = cts[amlcompute_cluster_name]

if not found:
    print('Creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6"
                                                                #vm_priority = 'lowpriority', # optional
                                                                max_nodes = 6)
    # Create the cluster.
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)

print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)

# For a more detailed view of current AmlCompute status, use get_status().
###Output
_____no_output_____
###Markdown
DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code
target_column_name = 'demand'
time_column_name = 'timeStamp'
dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name)
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
###Output
_____no_output_____
###Markdown
The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code
# Cut off the end of the dataset due to large number of nan values
dataset = dataset.time_before(datetime(2017, 10, 10, 5))
###Output
_____no_output_____
###Markdown
Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code
# split into train based on time
train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().sort_values(time_column_name).tail(5).reset_index(drop=True)
# split into test based on time
test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().head(5).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example).
Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. 
###Code best_run, fitted_model = remote_run.get_output()
fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True)
y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_test_copy = y_test.copy().astype(np.float)
y_test_copy.fill(np.nan)
y_predictions, X_trans = fitted_model.forecast(X_test,y_test_copy) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for a few selected metrics, including the mean absolute percentage error (MAPE). It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. 
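As a quick cross-check of the metrics module used below, MAPE can also be computed directly from aligned actual and predicted values; a minimal sketch using hypothetical arrays (not the notebook's `df_all`):

```python
import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error, ignoring rows where the actual value is zero
    actual, predicted = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    nonzero = actual != 0
    return np.mean(np.abs((actual[nonzero] - predicted[nonzero]) / actual[nonzero])) * 100

# Hypothetical aligned demand values (MW) and forecasts
print(mape([6000, 6200, 5900], [5900, 6300, 6050]))  # ~1.9%
```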
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_test_copy = y_test.copy().astype(np.float) y_test_copy.fill(np.nan) y_predictions, X_trans = fitted_model_lags.forecast(X_test,y_test_copy) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. 
Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.28.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. 
The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. 
This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add a forecasting parameters object to hold all the additional forecasting settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters

forecasting_parameters = ForecastingParameters(
    time_column_name=time_column_name,
    forecast_horizon=forecast_horizon,
    freq='H' # Set the forecast frequency to be hourly
)

automl_config = AutoMLConfig(task='forecasting',
                             primary_metric='normalized_root_mean_squared_error',
                             blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'],
                             experiment_timeout_hours=0.3,
                             training_data=train,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. 
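After the cell below retrieves `best_run`, the fitted model can optionally be registered in the workspace so it can be versioned and deployed later; a minimal sketch (the model name is illustrative, and `outputs/model.pkl` is assumed to be the conventional AutoML artifact path):

```python
# Register the best run's model into the workspace model registry
registered_model = best_run.register_model(
    model_name='nyc-energy-forecaster',   # illustrative name
    model_path='outputs/model.pkl',       # assumed AutoML output location
    tags={'task': 'forecasting', 'horizon': str(forecast_horizon)})
print(registered_model.name, registered_model.version)
```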
###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. 
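Alongside the scatter plot produced below, plotting forecasts and actuals against the time axis often makes horizon-dependent errors easier to see; a minimal sketch assuming an aligned frame such as the `df_all` built in the next cells:

```python
from matplotlib import pyplot as plt

def plot_forecast(df, time_col, target_col, pred_col='predicted'):
    # Sort by time so the lines are drawn in chronological order
    df = df.sort_values(time_col)
    plt.figure(figsize=(10, 4))
    plt.plot(df[time_col], df[target_col], label='actual')
    plt.plot(df[time_col], df[pred_col], label='forecast')
    plt.xlabel(time_col)
    plt.ylabel(target_col)
    plt.legend()
    plt.show()

# Example usage once df_all exists:
# plot_forecast(df_all, time_column_name, target_column_name)
```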
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. 
Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.16.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. 
The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. 
This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Testing the fitted model Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. 
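Before training, it can also help to confirm the hourly frequency of the data and locate any missing demand values; a minimal pandas sketch (column names follow the notebook's CSV):

```python
import pandas as pd

data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp'])

# Count missing values per column (the demand column has NaNs near the end of the series)
print(data.isna().sum())

# Check that timestamps are hourly and monotonically increasing
deltas = data['timeStamp'].diff().dropna()
print(deltas.value_counts().head())
```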
###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp'])
data.head() ###Output _____no_output_____ ###Markdown Split the data into train and test ###Code train = data[data['timeStamp'] < '2017-02-01']
test = data[data['timeStamp'] >= '2017-02-01'] ###Output _____no_output_____ ###Markdown Prepare the test data; we will feed X_test to the fitted model and get predictions ###Code y_test = test.pop('demand').values
X_test = test ###Output _____no_output_____ ###Markdown Split the train data to train and validUse one month's data as the validation data ###Code X_train = train[train['timeStamp'] < '2017-01-01']
X_valid = train[train['timeStamp'] >= '2017-01-01']
y_train = X_train.pop('demand').values
y_valid = X_valid.pop('demand').values
print(X_train.shape)
print(y_train.shape)
print(X_valid.shape)
print(y_valid.shape) ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], target values.||**X_valid**|Data used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, n_features]||**y_valid**|Data used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, ], target values.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.| ###Code time_column_name = 'timeStamp'
automl_settings = {
    "time_column_name": time_column_name,
}

automl_config = AutoMLConfig(task = 'forecasting',
                             debug_log = 'automl_nyc_energy_errors.log',
                             primary_metric='normalized_root_mean_squared_error',
                             iterations = 10,
                             iteration_timeout_minutes = 5,
                             X = X_train,
                             y = y_train,
                             X_valid = X_valid,
                             y_valid = y_valid,
                             path=project_folder,
                             verbosity = logging.INFO,
                             **automl_settings) ###Output _____no_output_____ ###Markdown You can call the submit method on the experiment object and pass the run configuration. For local runs the execution is synchronous. Depending on the data and number of iterations this can run for a while. You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True)
local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on the run returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output()
fitted_model.steps ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelPredict on the test set, and calculate residual values. 
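The residuals referred to above are just the differences between observed and predicted demand; a minimal sketch of summarizing them once `y_pred` is produced by the cell below:

```python
import numpy as np

# Positive residuals mean the model under-predicted demand
residuals = np.asarray(y_test, dtype=float) - np.asarray(y_pred, dtype=float)
residuals = residuals[~np.isnan(residuals)]  # drop rows where either value is missing

print('mean residual: %.2f' % residuals.mean())
print('std of residuals: %.2f' % residuals.std())
print('largest absolute error: %.2f' % np.abs(residuals).max())
```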
###Code y_pred = fitted_model.predict(X_test) y_pred ###Output _____no_output_____ ###Markdown Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics ###Code if len(y_test) != len(y_pred): raise ValueError( 'the true values and prediction values do not have equal length.') elif len(y_test) == 0: raise ValueError( 'y_true and y_pred are empty.') # if there is any non-numeric element in the y_true or y_pred, # the ValueError exception will be thrown. y_test_f = np.array(y_test).astype(float) y_pred_f = np.array(y_pred).astype(float) # remove entries both in y_true and y_pred where at least # one element in y_true or y_pred is missing y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] ###Output _____no_output_____ ###Markdown Calculate metrics for the prediction ###Code print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred))) # Explained variance score: 1 is perfect prediction print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred)) print('R2 score: %.2f' % r2_score(y_test, y_pred)) # Plot outputs %matplotlib notebook test_pred = plt.scatter(y_test, y_pred, color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
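If you want this check to go beyond printing the two versions, they can be compared programmatically; a minimal sketch (assumes the `packaging` library is installed, which is typical in Azure ML environments):

```python
from packaging import version
import azureml.core

notebook_sdk_version = "1.23.0"  # version this notebook was created with
if version.parse(azureml.core.VERSION) < version.parse(notebook_sdk_version):
    print("Warning: SDK {} is older than the notebook's {}; some features may be unavailable."
          .format(azureml.core.VERSION, notebook_sdk_version))
else:
    print("SDK version {} is new enough.".format(azureml.core.VERSION))
```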
###Code print("This notebook was created using version 1.23.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings'; for this forecasting task we add the forecasting parameters to hold all the additional forecasting settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment can take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. 
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.11.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. 
Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. 
Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
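To build intuition for what the lag and rolling window features represent, here is a small pandas sketch that mimics a 12-period lag and 4-period rolling min/max/sum on a toy demand series. This is an illustration only, not AutoML's internal featurizer: ###Code
import numpy as np
import pandas as pd

# Toy series standing in for hourly demand
demand = pd.Series(np.random.rand(24), name='demand')

# Approximate lag and rolling-window features by hand
illustration = pd.DataFrame({
    'demand': demand,
    'demand_lag12': demand.shift(12),            # value 12 periods back
    'demand_rolling_min4': demand.rolling(4).min(),
    'demand_rolling_max4': demand.rolling(4).max(),
    'demand_rolling_sum4': demand.rolling(4).sum(),
})
illustration.tail()
###Output _____no_output_____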
###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. 
[Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os if not pd.__version__ == '0.23.4': raise ValueError('azureml requires pandas <= 0.23') # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import sys sys.path.append(r'C:\Users\jp\Documents\GitHub\vault-private') import credentials ws = credentials.authenticate_AZR('gmail','testground') # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name # pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. 
Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget from azureml.core.compute_target import ComputeTargetException # %% setup compute (GPU & CPU) gpu_cluster_name = "gpu-cluster" cpu_cluster_name = "cpu-cluster" # GPU try: gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name) print("Found existing gpu cluster") except ComputeTargetException: print("Creating new gpu-cluster") compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NV6", # cpu=6, $1.202/hr (PSYG) min_nodes=0, max_nodes=4) gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config) gpu_cluster.wait_for_completion(show_output=True) # wait and show output # CPU try: cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name) print("Found existing cpu-cluster") except ComputeTargetException: print("Creating new cpu-cluster") compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D3_V2", # cpu=4, $0.224/hr (PSYG) min_nodes=0, max_nodes=6) cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config) cpu_cluster.wait_for_completion(show_output=True) # wait and show output # check if listed in ws cts = ws.compute_targets cts cts = ws.compute_targets print(cts) # attach compute (gpu / cpu / local) import pyautogui sys.path.append(r'C:\Users\jp\Documents\GitHub\jp-codes-python\autoML_py36') import jp_utils answer = pyautogui.prompt( text='Enter compute target (gpu, cpu, or local)', title='Compute target', default='cpu') compute_dict = {'gpu':'gpu-cluster', 'cpu':'cpu-cluster', 'local':'gpu-local'} target_name = jp_utils.generic_switch(compute_dict, answer) compute_target =cts[target_name] print(compute_target.name) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. 
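If you want to verify that cutoff yourself, one option is to pull the dataset into pandas and find the last timestamp with a non-missing demand value before running the trimming cell below. A minimal sketch using the `dataset`, `target_column_name` and `time_column_name` defined above: ###Code
# Check where the observed demand values end
check_df = dataset.to_pandas_dataframe()
last_valid = check_df.loc[check_df[target_column_name].notna(), time_column_name].max()
print('Last timestamp with observed demand:', last_valid)
###Output _____no_output_____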
###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code # remote_run = experiment.submit(automl_config, show_output=True) # remote_run.wait_for_completion() run_list = experiment.get_runs() run_list=[x for x in run_list] run_list from azureml.train.automl.run import AutoMLRun remote_run = AutoMLRun(experiment = experiment, run_id = 'AutoML_3e437bc0-6259-4345-ba7b-cd435714100e') ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown See what's in the best_run ###Code best_run.get_file_names() best_run.get_metrics() best_run.download_file(name='predicted_true') ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values X_test2 = X_test.append(X_test) X_test2.reset_index(inplace=True) print(X_test2) ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). 
###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. 
experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=True) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. 
The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.37.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-forecasting-energydemand" # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name pd.set_option("display.max_colwidth", -1) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_DS12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = "demand" time_column_name = "timeStamp" dataset = Dataset.Tabular.from_delimited_files( path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv" ).with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide. In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
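Before running the configuration below, it is worth a quick sanity check that the training data really is hourly, since we pass `freq="H"` explicitly. The short sketch below is not part of the original walkthrough; it only uses pandas and the `train` dataset defined earlier, and `pd.infer_freq` may return `None` if the series has gaps.
###Code
# Optional sanity check (not in the original notebook): confirm the training data is hourly
# before passing freq="H" to ForecastingParameters. infer_freq returns None if there are gaps.
train_times = train.to_pandas_dataframe()[time_column_name].sort_values()
print("Inferred frequency of the training timestamps:", pd.infer_freq(train_times))
###Output
_____no_output_____
###Markdown
If the inferred frequency differs from the `freq` you intend to pass, the frequency correction and padding described in the table above is what reconciles the two.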
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq="H", # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps["timeseriestransformer"].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps[ "timeseriestransformer" ].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute. 
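The remote scoring path below is the one this notebook uses. If you only want a quick local check of the fitted pipeline first, the short sketch below mirrors the local `forecast()` pattern used elsewhere in this document; the `_local` variable names are purely illustrative and not part of the original notebook.
###Code
# Optional local check (sketch): score the test set in-process instead of on the remote compute.
# Only suitable for small test sets; the remote run below is the tracked, recommended path.
X_test_local = test.to_pandas_dataframe().reset_index(drop=True)
y_test_local = X_test_local.pop(target_column_name).values
y_pred_local, X_trans_local = fitted_model.forecast(X_test_local)
y_pred_local[:5]
###Output
_____no_output_____
###Markdown
For the full, tracked evaluation we now submit the remote inference run.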
###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference( test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name, ) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv") ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual energy demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is, the previous values of the target variable, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
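To build some intuition for what the lag and rolling-window settings below generate, here is a small illustration in plain pandas (not AutoML code): a lag of 12 and rolling `max`/`min`/`sum` over a window of 4, applied to a toy demand series.
###Code
# Illustration only: roughly the kind of columns target_lags=12 and target_rolling_window_size=4 produce.
# AutoML engineers these features internally; this toy example just shows their shape.
toy = pd.DataFrame({"demand": np.arange(24, dtype=float)})
toy["demand_lag12"] = toy["demand"].shift(12)    # value from 12 periods earlier
rolling = toy["demand"].rolling(window=4)
toy["demand_roll_max"] = rolling.max()
toy["demand_roll_min"] = rolling.min()
toy["demand_roll_sum"] = rolling.sum()
toy.tail()
###Output
_____no_output_____
###Markdown
With that picture in mind, the configuration below asks AutoML to engineer the corresponding features on the real training data.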
###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4, ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=[ "ElasticNet", "ExtremeRandomTrees", "GradientBoosting", "XGBoostRegressor", "ExtremeRandomTrees", "AutoArima", "Prophet", ], # These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters, ) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
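Before scoring, it can be reassuring to confirm that lag and rolling-window columns actually made it into the featurization. The sketch below reuses the `timeseriestransformer` accessor shown earlier for the simple model and assumes the lag model's pipeline exposes the same named step; the exact engineered feature names are internal to AutoML and may vary.
###Code
# Sketch: list engineered features of the lag/rolling-window model and keep the ones
# whose names suggest lag or rolling-window features.
lag_feature_names = fitted_model_lags.named_steps[
    "timeseriestransformer"
].get_engineered_feature_names()
[name for name in lag_feature_names if "lag" in name.lower() or "window" in name.lower()]
###Output
_____no_output_____
###Markdown
We can now run remote inference with the lag model and evaluate it in the same way as before.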
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference( test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder="./forecast_advanced", ) advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file( "outputs/predictions.csv", "predictions_advanced.csv" ) fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b" ) test_test = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20) # For a more detailed view of current AmlCompute status, use get_status(). 
###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().sort_values(time_column_name).tail(5).reset_index(drop=True) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().head(5).reset_index(drop=True) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.30.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
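If you want a quick look at the raw file independent of the Azure ML Dataset abstraction, the same public CSV can be read directly with pandas. This side check is not required for the workflow; it simply confirms the column types and which columns will implicitly become features.
###Code
# Optional side check: read the raw CSV directly with pandas (outside the Dataset abstraction).
raw_df = pd.read_csv(
    "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv",
    parse_dates=["timeStamp"],
)
print(raw_df.dtypes)
print("Feature columns:", [c for c in raw_df.columns if c not in ("demand", "timeStamp")])
###Output
_____no_output_____
###Markdown
The cells below set up the same information through the Azure ML tabular dataset used for training.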
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. 
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.6.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features.
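Before building the TabularDataset, it can help to sanity-check the raw file with plain pandas: confirm that the time column parses as datetimes, that the sampling frequency really is hourly, and how many target values are missing. The sketch below is optional and purely illustrative; it reads the same public blob URL that the next cell passes to `Dataset.Tabular.from_delimited_files`, and the column names are the ones described above. ###Code
import pandas as pd

# Read the raw CSV from the same public blob used for the TabularDataset below
csv_url = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv"
raw = pd.read_csv(csv_url, parse_dates=['timeStamp']).sort_values('timeStamp')

# The data should be hourly; 'H' is the expected inferred frequency
print('Inferred frequency:', pd.infer_freq(raw['timeStamp'].iloc[:100]))

# The tail of the series has missing demand values, which are trimmed in a later cell
print('Rows:', len(raw), '| missing demand values:', raw['demand'].isna().sum())
###Output _____no_output_____ ###Markdown The next cell builds the TabularDataset that the rest of the notebook uses for training and prediction.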
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. 
Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. 
The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results.
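To make the lag and rolling-window idea concrete, the sketch below builds the same kinds of columns by hand on a toy series with pandas: shifted copies of the target for the lags, and max/min/sum over a trailing window. It is only an illustration of what such features look like, not AutoML's internal featurization code, and it uses small lags and a window of 4 so the frame stays readable. ###Code
import pandas as pd

# Toy hourly series standing in for the 'demand' target
toy = pd.DataFrame({'demand': [55.0, 60.0, 58.0, 63.0, 70.0, 72.0, 68.0, 65.0]})

# Target lags: previous values of the target become features
for lag in (1, 2, 3):
    toy['demand_lag{}'.format(lag)] = toy['demand'].shift(lag)

# Rolling-window features over the previous 4 observations; shift(1) keeps the
# window strictly in the past so the current value never leaks into its own feature
past_window = toy['demand'].shift(1).rolling(window=4)
toy['demand_roll_max'] = past_window.max()
toy['demand_roll_min'] = past_window.min()
toy['demand_roll_sum'] = past_window.sum()

print(toy)
###Output _____no_output_____ ###Markdown The configuration below asks AutoML to generate the real versions of these features (12 target lags and a rolling window of size 4) as part of featurization.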
###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.4.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. 
In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. 
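For reference, the same time-based split can be written with plain pandas once the data has been pulled into a dataframe. This is an optional sketch that simply mirrors the cut-off datetimes used in the next cells; it assumes `dataset`, `time_column_name` and the imports defined earlier in this notebook. ###Code
from datetime import datetime

# Pull the TabularDataset into pandas and split on the same boundaries as below
df = dataset.to_pandas_dataframe()
train_cutoff = datetime(2017, 8, 8, 5)   # last timestamp included in training
test_end = datetime(2017, 8, 10, 5)      # end of the 48-hour test window

train_df = df[df[time_column_name] <= train_cutoff]
test_df = df[(df[time_column_name] > train_cutoff) & (df[time_column_name] <= test_end)]
print(len(train_df), 'training rows |', len(test_df), 'test rows')
###Output _____no_output_____ ###Markdown The cells below perform the equivalent split directly on the TabularDataset.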
###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
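The table above notes that Rolling Origin Validation is used for the cross-validation splits. The sketch below illustrates the idea on a toy timeline: each fold trains on data up to some origin and validates on the periods immediately after it, with the origin rolling between folds so the splits stay temporally consistent. This is a conceptual illustration only, not the exact splitting logic AutoML applies; the fold count simply mirrors `n_cross_validations=3` from the configuration below. ###Code
import pandas as pd

# Toy hourly timeline standing in for the training period
timeline = pd.date_range('2017-08-01', periods=24, freq='H')
n_splits, fold_length = 3, 4  # three folds, each validating on four periods

for fold in range(n_splits):
    valid_end = len(timeline) - fold * fold_length
    valid_start = valid_end - fold_length
    print('Fold {}: train up to {}, validate {} .. {}'.format(
        fold, timeline[valid_start - 1], timeline[valid_start], timeline[valid_end - 1]))
###Output _____no_output_____ ###Markdown The configuration itself is set up in the next cell.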
###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. 
experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. 
The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.10.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. 
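The next cell prints the raw list of engineered names. As an optional aside, the same list can be summarised, for example by counting names per rough prefix, to get a feel for which raw columns contributed the most engineered features; the prefix split is only a heuristic used here for illustration. ###Code
from collections import Counter

# Group engineered feature names by a rough prefix (illustrative heuristic only)
names = fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
print(Counter(name.split('_')[0] for name in names).most_common())
###Output _____no_output_____ ###Markdown The full list of engineered names is shown below.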
###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. 
This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
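Once the Experiment object in the next cell has been created, the relationship between an Experiment and its Runs can be seen directly by listing whatever has already been submitted under that name. The snippet below is an optional sketch intended to be run after the following cell; it only relies on the standard `Experiment.get_runs` method. ###Code
# Optional: list previous runs of this experiment, if any exist yet
# (run this after the Experiment object below has been created)
for run in experiment.get_runs():
    print(run.id, run.get_status())
###Output _____no_output_____ ###Markdown The next cell loads the Workspace and creates the Experiment used for this run history.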
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. 
We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. 
###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. 
Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib notebook pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Now we will configure the target lags, that is, the previous values of the target variable, so the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. ###Code time_series_settings_with_lags = {
    'time_column_name': time_column_name,
    'max_horizon': max_horizon,
    'target_lags': 12,
    'target_rolling_window_size': 4
}

automl_config_lags = AutoMLConfig(task='forecasting',
                                  debug_log='automl_nyc_energy_errors.log',
                                  primary_metric='normalized_root_mean_squared_error',
                                  blacklist_models=['ElasticNet'],
                                  iterations=10,
                                  iteration_timeout_minutes=10,
                                  X=X_train,
                                  y=y_train,
                                  n_cross_validations=3,
                                  path=project_folder,
                                  verbosity=logging.INFO,
                                  **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. 
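Before submitting that run, it can help to picture what lag and rolling-window columns look like on the raw target. The pandas sketch below is purely illustrative: the column names are hypothetical, it assumes an hourly, gap-free series, and the real engineered features are built internally by AutoML during featurization.

```python
import pandas as pd

# Illustrative only: approximate lag / rolling-window features on the raw target.
# The column names are made up; AutoML generates its own engineered columns.
demo = data.sort_values(time_column_name).copy()
demo['demand_lag12'] = demo[target_column_name].shift(12)   # value 12 hours earlier
window = demo[target_column_name].rolling(window=4)         # trailing 4-hour window
demo['demand_roll_max'] = window.max()
demo['demand_roll_min'] = window.min()
demo['demand_roll_sum'] = window.sum()
demo[[time_column_name, target_column_name, 'demand_lag12', 'demand_roll_max']].tail()
```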
###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib notebook pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans_lags.columns[:-1] expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20) # For a more detailed view of current AmlCompute status, use get_status(). 
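# For example (a sketch; it assumes get_status() returns a status object with a
# serialize() method in this SDK version):
#     print(compute_target.get_status().serialize())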
###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.5.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config()

# choose a name for the run history container in the workspace
experiment_name = 'automl-forecasting-energydemand'

# # project folder
# project_folder = './sample_projects/automl-forecasting-energy-demand'

experiment = Experiment(ws, experiment_name)

output = {}
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your cluster.
amlcompute_cluster_name = "energy-cluster"

# Verify that the cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
                                                           max_nodes=6)
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)

compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
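As a quick sanity check of that schema, the sketch below (assuming the `dataset`, `target_column_name` and `time_column_name` defined in the next cell) pulls a pandas view of the data and reports where the recorded demand ends; this is what motivates the trimming step a little further down.

```python
# Sanity-check sketch: confirm column types and find the last recorded demand value.
# Assumes the dataset and column names defined in the next cell.
df = dataset.to_pandas_dataframe()
print(df.dtypes)  # the time column should show up as datetime64
last_recorded = df.loc[df[target_column_name].notnull(), time_column_name].max()
print('Last timestamp with a recorded demand value:', last_recorded)
```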
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. 
Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. 
Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants, metrics from matplotlib import pyplot as plt # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
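The settings below use a single integer for `target_lags`. Depending on the SDK version, a list of specific lags may also be accepted, which lets you encode known hourly, daily and weekly structure directly; the dictionary below is a hypothetical alternative and is not used in this run.

```python
# Hypothetical alternative (not used below): name the lags that match known
# hourly / daily / weekly structure instead of a single look-back depth.
automl_lag_list_settings = {
    'time_column_name': time_column_name,
    'max_horizon': max_horizon,
    'target_lags': [1, 24, 168],        # previous hour, same hour yesterday, same hour last week
    'target_rolling_window_size': 24,   # aggregate over the trailing day
}
```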
###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants, metrics from matplotlib import pyplot as plt # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. 
[Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. 
Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], target values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.
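To build intuition for the Rolling Origin Validation mentioned in the table above, the cell below is an optional, purely illustrative sketch of temporally consistent splits. It only uses the `X_train`, `time_column_name` and `max_horizon` already defined in this notebook; AutoML performs its own splitting internally, so this is for intuition only, not the actual implementation.
###Code
# Illustrative sketch of rolling-origin (expanding window) validation.
# Each fold trains on all history up to a forecast origin and validates
# on the max_horizon periods that immediately follow that origin.
n_splits = 3
times = X_train[time_column_name].sort_values().reset_index(drop=True)

for fold in range(n_splits, 0, -1):
    origin = times.iloc[-(fold * max_horizon) - 1]  # forecast origin for this fold
    train_fold = X_train[X_train[time_column_name] <= origin]
    valid_fold = X_train[X_train[time_column_name] > origin].sort_values(time_column_name).head(max_horizon)
    print('origin: {} | training rows: {} | validation rows: {}'.format(
        origin, len(train_fold), len(valid_fold)))
###Output
_____no_output_____
###Markdown
Each fold validates only on periods that come strictly after its training window, which is what keeps the evaluation temporally consistent. We can now define the time-series settings and the AutoMLConfig for this run.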
###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees'], iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. 
Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. 
The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. ###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans_lags.columns[:-1] expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. 
Training the Model using local compute4. Exploring the results5. Testing the fitted model SetupAs part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code import azureml.core import pandas as pd import numpy as np import os import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from azureml.train.automl.run import AutoMLRun from matplotlib import pyplot as plt from matplotlib.pyplot import imshow from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) pd.DataFrame(data=output, index=['']).T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown Split the data to train and test ###Code train = data[data['timeStamp'] < '2017-02-01'] test = data[data['timeStamp'] >= '2017-02-01'] ###Output _____no_output_____ ###Markdown Prepare the test data, we will feed X_test to the fitted model and get prediction ###Code y_test = test.pop('demand').values X_test = test ###Output _____no_output_____ ###Markdown Split the train data to train and validUse one month's data as valid data ###Code X_train = train[train['timeStamp'] < '2017-01-01'] X_valid = train[train['timeStamp'] >= '2017-01-01'] y_train = X_train.pop('demand').values y_valid = X_valid.pop('demand').values print(X_train.shape) print(y_train.shape) print(X_valid.shape) print(y_valid.shape) ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. ||**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]||**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. 
An indicator matrix turns on multilabel classification. This should be an array of integers. ||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_column_name = 'timeStamp' automl_settings = { "time_column_name": time_column_name, } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, X_valid = X_valid, y_valid = y_valid, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelPredict on training and test set, and calculate residual values. ###Code y_pred = fitted_model.predict(X_test) y_pred ###Output _____no_output_____ ###Markdown Define a Check Data FunctionRemove the nan values from y_test to avoid error when calculate metrics ###Code def _check_calc_input(y_true, y_pred, rm_na=True): """ Check that 'y_true' and 'y_pred' are non-empty and have equal length. :param y_true: Vector of actual values :type y_true: array-like :param y_pred: Vector of predicted values :type y_pred: array-like :param rm_na: If rm_na=True, remove entries where y_true=NA and y_pred=NA. :type rm_na: boolean :return: Tuple (y_true, y_pred). if rm_na=True, the returned vectors may differ from their input values. :rtype: Tuple with 2 entries """ if len(y_true) != len(y_pred): raise ValueError( 'the true values and prediction values do not have equal length.') elif len(y_true) == 0: raise ValueError( 'y_true and y_pred are empty.') # if there is any non-numeric element in the y_true or y_pred, # the ValueError exception will be thrown. 
y_true = np.array(y_true).astype(float) y_pred = np.array(y_pred).astype(float) if rm_na: # remove entries both in y_true and y_pred where at least # one element in y_true or y_pred is missing y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))] y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))] return (y_true_rm_na, y_pred_rm_na) else: return y_true, y_pred ###Output _____no_output_____ ###Markdown Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics ###Code y_test,y_pred = _check_calc_input(y_test,y_pred) ###Output _____no_output_____ ###Markdown Calculate metrics for the prediction ###Code print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred))) # Explained variance score: 1 is perfect prediction print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred)) print('R2 score: %.2f' % r2_score(y_test, y_pred)) # Plot outputs %matplotlib notebook test_pred = plt.scatter(y_test, y_pred, color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. 
###Code print("This notebook was created using version 1.21.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features.
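Before creating the AzureML dataset, it can be useful to sanity-check the raw CSV directly. The cell below is optional and purely illustrative; it uses plain pandas against the same public blob URL that the next cell registers, and simply confirms that the timestamp parses as a datetime and that the observations are hourly.
###Code
import pandas as pd

# Quick, optional sanity check of the raw file (illustrative only)
raw_check = pd.read_csv("https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv",
                        parse_dates=['timeStamp'])

print(raw_check.dtypes)                         # demand, temp and precip should be numeric
print(raw_check['timeStamp'].diff().mode()[0])  # expect a spacing of 1 hour between rows
###Output
_____no_output_____
###Markdown
With the schema confirmed, we point a tabular dataset at the same file and declare its timestamp column.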
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment.
We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the forecasting parameters to hold all the additional forecasting settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification.
In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Testing the fitted model Setup To use the *forecasting* task in AutoML, you need to have the **azuremlftk** package installed in your environment. The following cell tests whether this package is installed locally and, if not, gives you instructions for installing it. ###Code try: import ftk print('Using FTK version ' + ftk.__version__) except ImportError: print("Unable to import forecasting package. This notebook does not work without this package.\n" + "Please open a command prompt and run `pip install azuremlftk` to install the package. \n" + "Make sure you install the package into AutoML's Python environment.\n\n" + "For instance, if AutoML is installed in a conda environment called `python36`, run:\n" + "> activate python36\n> pip install azuremlftk") import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown Split the data to train and test ###Code train = data[data['timeStamp'] < '2017-02-01'] test = data[data['timeStamp'] >= '2017-02-01'] ###Output _____no_output_____ ###Markdown Prepare the test data, we will feed X_test to the fitted model and get prediction ###Code y_test = test.pop('demand').values X_test = test ###Output _____no_output_____ ###Markdown Split the train data to train and validUse one month's data as valid data ###Code X_train = train[train['timeStamp'] < '2017-01-01'] X_valid = train[train['timeStamp'] >= '2017-01-01'] y_train = X_train.pop('demand').values y_valid = X_valid.pop('demand').values print(X_train.shape) print(y_train.shape) print(X_valid.shape) print(y_valid.shape) ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. ||**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]||**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. ||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_column_name = 'timeStamp' automl_settings = { "time_column_name": time_column_name, } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, X_valid = X_valid, y_valid = y_valid, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown You can call the submit method on the experiment object and pass the run configuration. 
For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelPredict on training and test set, and calculate residual values. ###Code y_pred = fitted_model.predict(X_test) y_pred ###Output _____no_output_____ ###Markdown Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics ###Code if len(y_test) != len(y_pred): raise ValueError( 'the true values and prediction values do not have equal length.') elif len(y_test) == 0: raise ValueError( 'y_true and y_pred are empty.') # if there is any non-numeric element in the y_true or y_pred, # the ValueError exception will be thrown. y_test_f = np.array(y_test).astype(float) y_pred_f = np.array(y_pred).astype(float) # remove entries both in y_true and y_pred where at least # one element in y_true or y_pred is missing y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] ###Output _____no_output_____ ###Markdown Calculate metrics for the prediction ###Code print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred))) # Explained variance score: 1 is perfect prediction print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred)) print('R2 score: %.2f' % r2_score(y_test, y_pred)) # Plot outputs %matplotlib notebook test_pred = plt.scatter(y_test, y_pred, color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Viewing the engineered names for featurized data and featurization summary for all raw features6. 
Testing the fitted model Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() # let's take note of what columns means what in the data time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Split the data into train and test sets ###Code X_train = data[data[time_column_name] < '2017-02-01'] X_test = data[data[time_column_name] >= '2017-02-01'] y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code automl_settings = { "time_column_name": time_column_name } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. 
Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. 
Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's index does not include the forecast origin, so reset it for the merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return clean df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metrics ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy-tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features to improve the forecast We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. Now we will configure target lags, that is, the previous values of the target variable, so the prediction is no longer horizon-less. We therefore must specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.
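To build intuition for what these look-back features are, here is a small, purely illustrative pandas sketch of lag and rolling-window columns computed by hand on the training target; the column names are hypothetical, and AutoML generates its own equivalents internally during featurization. ###Code
# Illustrative only: AutoML constructs its own lag/rolling-window features internally.
# We assume y_train (defined above) holds the hourly demand values as a numpy array.
y_series = pd.Series(y_train)

lag_rolling_demo = pd.DataFrame({
    'demand': y_series,
    'demand_lag1': y_series.shift(1),                        # value one period back
    'demand_rollmax5': y_series.shift(1).rolling(5).max(),   # max over the previous 5 periods
    'demand_rollmin5': y_series.shift(1).rolling(5).min(),   # min over the previous 5 periods
    'demand_rollsum5': y_series.shift(1).rolling(5).sum()    # sum over the previous 5 periods
})
lag_rolling_demo.head(10)
###Output _____no_output_____ ###Markdown The settings below ask AutoML to generate this kind of look-back feature automatically.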
###Code automl_settings_lags = { 'time_column_name': time_column_name, 'target_lags': 1, 'target_rolling_window_size': 5, # you MUST set the max_horizon when using lags and rolling windows # it is optional when looking-back features are not used 'max_horizon': len(y_test), # only one grain } automl_config_lags = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings_lags) local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_lags[target_column_name], df_lags['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans.columns[:-1] expl = explain_model(fitted_model, X_train, X_test, features = features, best_run=best_run_lags, y_train = y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. 
Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import json import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.39.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-forecasting-energydemand" # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name pd.set_option("display.max_colwidth", -1) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_DS12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = "demand" time_column_name = "timeStamp" dataset = Dataset.Tabular.from_delimited_files( path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv" ).with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
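For instance, the hourly series used here could be resampled to a daily frequency with pandas; the sketch below is purely illustrative and not part of this notebook's pipeline, and the choice of aggregation functions is an assumption. ###Code
# Illustrative only: aggregate the hourly data to daily granularity before training.
hourly_df = dataset.to_pandas_dataframe()
daily_df = (hourly_df.set_index(time_column_name)
                     .resample('D')    # 'D' = calendar-day frequency
                     .agg({target_column_name: 'sum', 'temp': 'mean', 'precip': 'mean'})
                     .reset_index())
daily_df.head()
###Output _____no_output_____ ###Markdown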
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq="H", # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters, ) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while. One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Run detailsBelow we retrieve the best Run object from among all the runs in the experiment. ###Code best_run = remote_run.get_best_child() best_run ###Output _____no_output_____ ###Markdown FeaturizationWe can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs. ###Code # Download the JSON file locally best_run.download_file("outputs/engineered_feature_names.json", "engineered_feature_names.json") with open("engineered_feature_names.json", "r") as f: records = json.load(f) records ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Download the featurization summary JSON file locally best_run.download_file("outputs/featurization_summary.json", "featurization_summary.json") # Render the JSON as a pandas DataFrame with open("featurization_summary.json", "r") as f: records = json.load(f) fs = pd.DataFrame.from_records(records) # View a summary of the featurization fs[["RawFeatureName", "TypeDetected", "Dropped", "EngineeredFeatureCount", "Transformations"]] ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute.
###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference( test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name, ) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv") ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual energy demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is, the previous values of the target variable, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results.
###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4, ) automl_config = AutoMLConfig( task="forecasting", primary_metric="normalized_root_mean_squared_error", blocked_models=[ "ElasticNet", "ExtremeRandomTrees", "GradientBoosting", "XGBoostRegressor", "AutoArima", "Prophet", ], # These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters, ) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Run details ###Code # Retrieve the best run from the advanced (lag/rolling window) experiment run best_run_lags = advanced_remote_run.get_best_child() best_run_lags ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference( test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder="./forecast_advanced", ) advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file( "outputs/predictions.csv", "predictions_advanced.csv" ) fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b" ) test_test = plt.scatter( fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.12.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. 
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Testing the fitted model SetupAs part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. 
###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown Split the data into train and test sets ###Code train = data[data['timeStamp'] < '2017-02-01'] test = data[data['timeStamp'] >= '2017-02-01'] ###Output _____no_output_____ ###Markdown Prepare the test data; we will feed X_test to the fitted model and get predictions ###Code y_test = test.pop('demand').values X_test = test ###Output _____no_output_____ ###Markdown Split the train data into train and validation setsUse the last month of the training data as the validation set ###Code X_train = train[train['timeStamp'] < '2017-01-01'] X_valid = train[train['timeStamp'] >= '2017-01-01'] y_train = X_train.pop('demand').values y_valid = X_valid.pop('demand').values print(X_train.shape) print(y_train.shape) print(X_valid.shape) print(y_valid.shape) ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], training target values (here, the energy demand).||**X_valid**|Data used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, n_features]||**y_valid**|Target values used to evaluate a model in an iteration. (sparse) array-like, shape = [n_samples, ].||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_column_name = 'timeStamp' automl_settings = { "time_column_name": time_column_name, } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, X_valid = X_valid, y_valid = y_valid, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown You can call the submit method on the experiment object and pass the run configuration. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on the run returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration.
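For example, here is a minimal sketch of those overloads, assuming the SDK version in use accepts the `iteration` and `metric` arguments to `get_output`: ###Code
# Illustrative only: retrieve a specific iteration, or the best run by a chosen logged metric.
run_iter3, model_iter3 = local_run.get_output(iteration=3)
run_best_r2, model_best_r2 = local_run.get_output(metric='r2_score')
###Output _____no_output_____ ###Markdown Below we simply retrieve the overall best run and its fitted model.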
###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelPredict on training and test set, and calculate residual values. ###Code y_pred = fitted_model.predict(X_test) y_pred ###Output _____no_output_____ ###Markdown Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics ###Code if len(y_test) != len(y_pred): raise ValueError( 'the true values and prediction values do not have equal length.') elif len(y_test) == 0: raise ValueError( 'y_true and y_pred are empty.') # if there is any non-numeric element in the y_true or y_pred, # the ValueError exception will be thrown. y_test_f = np.array(y_test).astype(float) y_pred_f = np.array(y_pred).astype(float) # remove entries both in y_true and y_pred where at least # one element in y_true or y_pred is missing y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] ###Output _____no_output_____ ###Markdown Calculate metrics for the prediction ###Code print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred))) # Explained variance score: 1 is perfect prediction print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred)) print('R2 score: %.2f' % r2_score(y_test, y_pred)) # Plot outputs %matplotlib notebook test_pred = plt.scatter(y_test, y_pred, color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.26.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. 
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. 
Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the forecasting parameters to hold all the additional forecasting settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. 
###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. The forecast function can also handle more complicated scenarios; see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. 
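The `align_outputs` function imported in the next cell comes from the `forecasting_helper.py` script that ships alongside this notebook and is not reproduced here. Purely as a hedged sketch of the kind of bookkeeping such a helper performs (the function name and merge logic below are illustrative assumptions, not the helper's actual code), aligning a forecast back to its inputs typically means joining the predictions to the test set on the shared time index and dropping rows where either the actual or the prediction is missing. ###Code
import pandas as pd

# Hypothetical sketch only -- not the implementation inside forecasting_helper.py.
def align_forecast_sketch(y_pred, X_trans, X_test, y_test, target_col, pred_col='predicted'):
    # attach predictions to the featurized (time-indexed) frame returned by forecast()
    df_fcst = pd.DataFrame({pred_col: y_pred}, index=X_trans.index).reset_index()
    # put the actuals back onto the raw test features
    df_actual = X_test.copy()
    df_actual[target_col] = y_test
    # merge on the shared columns (e.g. the time stamp) and keep fully populated rows
    merged = df_fcst.merge(df_actual.reset_index(drop=True), how='right')
    return merged[merged[[target_col, pred_col]].notnull().all(axis=1)]
###Output _____no_output_____ ###Markdown We use the packaged helper for the actual alignment below.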
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Viewing the engineered names for featurized data and featurization summary for all raw features6. 
Testing the fitted model Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you will need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead the energy demand data from file, and preview the data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() # let's take note of what each column means in the data time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Split the data into train and test sets ###Code X_train = data[data[time_column_name] < '2017-02-01'] X_test = data[data[time_column_name] >= '2017-02-01'] y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], target values.||**n_cross_validations**|Number of cross validation splits.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.| ###Code automl_settings = { "time_column_name": time_column_name } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. 
Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. 
Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's index does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metrics ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features to improve the forecast We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data.Now that we have configured target lags, that is, the previous values of the target variable, the prediction is no longer horizon-less. We therefore must specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. 
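To build intuition for what these two settings generate, the next cell is a purely illustrative pandas sketch (this is not how AutoML builds its features internally, and the column names are made up): a one-period lag of the target plus rolling max/min/sum statistics computed over the previous five observations only, so no future values leak into a row. ###Code
import pandas as pd

# Illustration only: lag and rolling-window features on a toy hourly demand series.
demo = pd.DataFrame({'demand': [5.0, 6.0, 7.0, 6.5, 8.0, 9.0, 7.5, 8.5]})
demo['demand_lag1'] = demo['demand'].shift(1)        # analogous to target_lags = 1
past = demo['demand'].shift(1).rolling(window=5)     # statistics over past values only
demo['demand_roll_max'] = past.max()
demo['demand_roll_min'] = past.min()
demo['demand_roll_sum'] = past.sum()
demo
###Output _____no_output_____ ###Markdown The settings below ask AutoML to generate this kind of look-back feature automatically during featurization.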
###Code automl_settings_lags = { 'time_column_name': time_column_name, 'target_lags': 1, 'target_rolling_window_size': 5, # you MUST set the max_horizon when using lags and rolling windows # it is optional when looking-back features are not used 'max_horizon': len(y_test), # only one grain } automl_config_lags = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings_lags) local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_lags[target_column_name], df_lags['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans.columns[:-1] expl = explain_model(fitted_model, X_train, X_test, features = features, best_run=best_run_lags, y_train = y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](introduction)1. [Setup](setup)1. [Data and Forecasting Configurations](data)1. [Train](train)1. [Generate and Evaluate the Forecast](forecast)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. 
Generate the forecast and compute the out-of-sample accuracy metrics1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast with lagging features Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.34.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the forecasting parameters to hold all the additional forecasting settings.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**forecasting_parameters**|A class that holds all the forecasting-related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
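The `n_cross_validations` row above mentions Rolling Origin Validation. As a rough sketch of what temporally consistent splits look like (a simplified illustration under assumed sizes, not AutoML's internal implementation), each fold trains on an expanding prefix of the series and validates on the periods that immediately follow it: ###Code
import numpy as np

# Simplified illustration of rolling-origin (expanding-window) validation splits:
# each fold trains on everything before a cut-off and validates on the next `horizon` periods.
def rolling_origin_splits(n_samples, n_splits, horizon):
    for i in range(n_splits):
        cutoff = n_samples - (n_splits - i) * horizon
        yield np.arange(cutoff), np.arange(cutoff, cutoff + horizon)

for train_idx, valid_idx in rolling_origin_splits(n_samples=240, n_splits=3, horizon=48):
    print('train: 0..{}  validate: {}..{}'.format(train_idx[-1], valid_idx[0], valid_idx[-1]))
###Output _____no_output_____ ###Markdown With that intuition in place, we construct the forecasting parameters and the AutoMLConfig below.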
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, freq='H' # Set the forecast frequency to be hourly ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.The inference will run on a remote compute. In this example, it will re-use the training compute. ###Code test_experiment = Experiment(ws, experiment_name + "_inference") ###Output _____no_output_____ ###Markdown Retrieving forecasts from the modelWe have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute. 
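Neither `run_forecast.py` nor `forecasting_script.py` is reproduced in this notebook, so the call in the next cell is best read as a black box. As a hedged outline of what such a helper typically wraps (the script name, argument flags and source directory below are assumptions for illustration, not the actual contents of those files), a remote scoring run is usually assembled from a `ScriptRunConfig` and submitted to the inference experiment: ###Code
from azureml.core import ScriptRunConfig

# Hypothetical outline only -- the shipped run_forecast helper handles these details.
def submit_inference_sketch(test_experiment, compute_target, train_run, test_dataset, target_column_name):
    config = ScriptRunConfig(source_directory='.',                # assumed folder holding the scoring script
                             script='forecasting_script.py',      # helper script referenced above
                             arguments=['--target_column_name', target_column_name,
                                        '--test_dataset', test_dataset.as_named_input('test_data')],
                             compute_target=compute_target,
                             environment=train_run.get_environment())  # reuse the training run's environment
    return test_experiment.submit(config)
###Output _____no_output_____ ###Markdown The packaged helper used below takes care of these details, including downloading the resulting predictions file.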
###Code from run_forecast import run_remote_inference remote_run_infer = run_remote_inference(test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test, target_column_name=target_column_name) remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine remote_run_infer.download_file('outputs/predictions.csv', 'predictions.csv') ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals). ###Code # load forecast data frame fcst_df = pd.read_csv('predictions.csv', parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df['predicted'], color='b') test_test = plt.scatter(fcst_df[target_column_name], fcst_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is, the previous values of the target variable, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. 
###Code test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced") advanced_remote_run_infer = run_remote_inference(test_experiment=test_experiment_advanced, compute_target=compute_target, train_run=best_run_lags, test_dataset=test, target_column_name=target_column_name, inference_folder='./forecast_advanced') advanced_remote_run_infer.wait_for_completion(show_output=False) # download the inference output file to the local machine advanced_remote_run_infer.download_file('outputs/predictions.csv', 'predictions_advanced.csv') fcst_adv_df = pd.read_csv('predictions_advanced.csv', parse_dates=[time_column_name]) fcst_adv_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=fcst_adv_df[target_column_name], y_pred=fcst_adv_df['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df['predicted'], color='b') test_test = plt.scatter(fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](Advanced Training)1. [Advanced Results](Advanced Results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. 
Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20) # For a more detailed view of current AmlCompute status, use get_status(). 
###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used for training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe() ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 5), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. 
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.constants.supportedmodels.regression?view=azure-ml-py).||**experiment_timeout_minutes**|Maximum amount of time in minutes that the experiment takes before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_minutes parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using the get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe() y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. There are two reasons for this.We need to pass the recent values of the target variable y, whereas the scikit-compatible predict function only takes the non-target variables 'test'. In our case, the test data immediately follows the training data, and we fill the target variable with NaN. The NaN serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the forecast origin - the last time when the value of the target is known.Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y by NaN. # The forecast origin will be at the beginning of the first forecast period. # (Which is the same time as the end of the last training period.) y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test, y_query) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual demand values for some select metrics, including the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. 
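As a quick reference, the sketch below shows one common way to compute MAPE by hand with NumPy; it is an illustration only, separate from the AutoML metrics module used in the next cells, and the helper name `mape` is made up for this example. ###Code
# Illustration only: a hand-rolled MAPE that ignores NaNs and near-zero actuals.
import numpy as np

def mape(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mask = ~(np.isnan(actual) | np.isnan(predicted) | np.isclose(actual, 0.0))
    return float(np.mean(np.abs((actual[mask] - predicted[mask]) / actual[mask])) * 100)

# Example with made-up numbers: mape([100.0, 200.0], [110.0, 190.0]) -> 7.5
###Output
_____no_output_____
###Markdown
The cells below align the forecast to the inputs with the provided helper and then score it with the AutoML metrics module.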
###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced TrainingWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. 
Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # Replace ALL values in y by NaN. # The forecast origin will be at the beginning of the first forecast period. # (Which is the same time as the end of the last training period.) y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test, y_query) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. 
Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.8.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. 
###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. 
This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. 
The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.| ###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima'], iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. 
This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. 
Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy-tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we have configured target lags, that is, the previous values of the target variable, the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. 
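To build some intuition for what these settings produce, here is a rough pandas sketch of lag and rolling-window features computed by hand on the target column; it is an illustration only, not AutoML's internal featurizer, and the new column names are made up. ###Code
# Illustration only: hand-rolled lag and rolling-window features on the target.
# Assumes `data` is the hourly NYC energy DataFrame loaded earlier, sorted by time with no gaps.
example = data.sort_values(time_column_name).copy()
example['demand_lag12'] = example[target_column_name].shift(12)        # value 12 hours earlier
trailing = example[target_column_name].shift(1).rolling(window=4)      # trailing 4-hour window
example['demand_roll_min'] = trailing.min()
example['demand_roll_max'] = trailing.max()
example['demand_roll_sum'] = trailing.sum()
example[[time_column_name, target_column_name, 'demand_lag12', 'demand_roll_min']].tail()
###Output
_____no_output_____
###Markdown
In the run below, AutoML constructs analogous engineered features internally during its featurization stage.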
###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib inline pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast?The following steps will allow you to compute and visualize engineered feature importance based on your test data for forecasting. Setup the model explanations for AutoML modelsThe *fitted_model* can generate the following which will be used for getting the engineered and raw feature explanations using *automl_setup_model_explanations*:-1. Featurized data from train samples/test samples 2. Gather engineered and raw feature name lists3. Find the classes in your labeled column in classification scenariosThe *automl_explainer_setup_obj* contains all the structures from above list. ###Code from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train.copy(), X_test=X_test.copy(), y=y_train, task='forecasting') ###Output _____no_output_____ ###Markdown Initialize the Mimic Explainer for feature importanceFor explaining the AutoML models, use the *MimicWrapper* from *azureml.explain.model* package. The *MimicWrapper* can be initialized with fields in *automl_explainer_setup_obj*, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (*fitted_model* here). The *MimicWrapper* also takes the *best_run* object where the raw and engineered explanations will be uploaded. ###Code from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel from azureml.explain.model.mimic_wrapper import MimicWrapper explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, init_dataset=automl_explainer_setup_obj.X_transform, run=best_run, features=automl_explainer_setup_obj.engineered_feature_names, feature_maps=[automl_explainer_setup_obj.feature_map]) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing engineered feature importanceThe *explain()* method in *MimicWrapper* can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the generated engineered features by AutoML featurizers. 
###Code engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform) print(engineered_explanations.get_feature_importance_dict()) from azureml.contrib.interpret.visualize import ExplanationDashboard ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform) ###Output _____no_output_____ ###Markdown Use Mimic Explainer for computing and visualizing raw feature importanceThe *explain()* method in *MimicWrapper* can be again called with the transformed test samples and setting *get_raw* to *True* to get the feature importance for the raw features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the raw features. ###Code raw_explanations = explainer.explain(['local', 'global'], get_raw=True, raw_feature_names=automl_explainer_setup_obj.raw_feature_names, eval_dataset=automl_explainer_setup_obj.X_test_transform) print(raw_explanations.get_feature_importance_dict()) from azureml.contrib.interpret.visualize import ExplanationDashboard ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Testing the fitted model Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. 
###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown Split the data to train and test ###Code train = data[data['timeStamp'] < '2017-02-01'] test = data[data['timeStamp'] >= '2017-02-01'] ###Output _____no_output_____ ###Markdown Prepare the test data, we will feed X_test to the fitted model and get prediction ###Code y_test = test.pop('demand').values X_test = test ###Output _____no_output_____ ###Markdown Split the train data to train and validUse one month's data as valid data ###Code X_train = train[train['timeStamp'] < '2017-01-01'] X_valid = train[train['timeStamp'] >= '2017-01-01'] y_train = X_train.pop('demand').values y_valid = X_valid.pop('demand').values print(X_train.shape) print(y_train.shape) print(X_valid.shape) print(y_valid.shape) ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. ||**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]||**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. ||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. ###Code time_column_name = 'timeStamp' automl_settings = { "time_column_name": time_column_name, } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, X_valid = X_valid, y_valid = y_valid, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown You can call the submit method on the experiment object and pass the run configuration. 
For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelPredict on training and test set, and calculate residual values. ###Code y_pred = fitted_model.predict(X_test) y_pred ###Output _____no_output_____ ###Markdown Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics ###Code if len(y_test) != len(y_pred): raise ValueError( 'the true values and prediction values do not have equal length.') elif len(y_test) == 0: raise ValueError( 'y_true and y_pred are empty.') # if there is any non-numeric element in the y_true or y_pred, # the ValueError exception will be thrown. y_test_f = np.array(y_test).astype(float) y_pred_f = np.array(y_pred).astype(float) # remove entries both in y_true and y_pred where at least # one element in y_true or y_pred is missing y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))] ###Output _____no_output_____ ###Markdown Calculate metrics for the prediction ###Code print("[Test Data] \nRoot Mean squared error: %.2f" % np.sqrt(mean_squared_error(y_test, y_pred))) # Explained variance score: 1 is perfect prediction print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred)) print('R2 score: %.2f' % r2_score(y_test, y_pred)) # Plot outputs %matplotlib notebook test_pred = plt.scatter(y_test, y_pred, color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. 
Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.22.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. 
amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. 
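For example, an hourly series can be resampled to daily frequency with pandas before the dataset is registered, so that a 48-period horizon covers 48 days instead of 48 hours. The following is only an illustrative sketch of that idea (the column names follow the nyc_energy.csv schema used in this notebook, and this step is not required for the example below): ###Code
# Illustration only: aggregate an hourly demand series to daily frequency.
# Aggregating to a coarser time scale shortens the effective forecast horizon.
import pandas as pd

hourly = pd.DataFrame({
    'timeStamp': pd.date_range('2017-08-01', periods=72, freq='H'),
    'demand': 5000.0,
    'temp': 75.0,
    'precip': 0.0
})

daily = (hourly.set_index('timeStamp')
               .resample('D')
               .agg({'demand': 'sum', 'temp': 'mean', 'precip': 'sum'})
               .reset_index())
daily.head()
###Output _____no_output_____ ###Markdown 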
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code forecast_horizon = 48 ###Output _____no_output_____ ###Markdown Forecasting ParametersTo define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.|Property|Description||-|-||**time_column_name**|The name of your time column.||**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).||**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.htmldateoffset-objects) for more information. TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**forecasting_parameters**|A class holds all the forecasting related parameters.| This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results. 
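Before submitting, it may help to picture what `n_cross_validations` does for a time series: instead of random splits, each validation fold is a block of consecutive observations that immediately follows its (ever-growing) training window. The sketch below only illustrates that rolling-origin idea on a toy series; it is not AutoML's internal implementation, and the fold sizes are arbitrary: ###Code
# Illustration only: rolling-origin (expanding window) splits on a toy hourly series.
# This is not AutoML's internal code; it just shows the principle of temporally consistent folds.
import pandas as pd

toy = pd.DataFrame({
    'timeStamp': pd.date_range('2017-01-01', periods=240, freq='H'),
    'demand': range(240)
})

n_folds = 3
fold_horizon = 48  # validate on 48-hour blocks, mirroring the forecast horizon
for fold in range(n_folds):
    train_end = len(toy) - (n_folds - fold) * fold_horizon
    fold_train = toy.iloc[:train_end]
    fold_valid = toy.iloc[train_end:train_end + fold_horizon]
    print('fold', fold,
          '| train through', fold_train['timeStamp'].iloc[-1],
          '| validate', fold_valid['timeStamp'].iloc[0], 'to', fold_valid['timeStamp'].iloc[-1])
###Output _____no_output_____ ###Markdown With that picture in mind, we create the forecasting parameters and the AutoML configuration.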
###Code from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=forecasting_parameters) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). 
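As a quick reference, MAPE averages the absolute errors as a percentage of the actual values; a minimal sketch (skipping NaNs and near-zero actuals, which would otherwise blow up the ratio) is shown here, while the scoring module used below computes a fuller metric set: ###Code
# Minimal MAPE sketch; illustration only.
import numpy as np

def mape(actual, pred):
    actual = np.asarray(actual, dtype=float)
    pred = np.asarray(pred, dtype=float)
    # keep rows where both values are present and the actual is not (close to) zero
    keep = ~(np.isnan(actual) | np.isnan(pred)) & ~np.isclose(actual, 0.0)
    return np.mean(100 * np.abs((actual[keep] - pred[keep]) / actual[keep]))

print(mape([100, 200, 0, 400], [110, 190, 5, 380]))
###Output _____no_output_____ ###Markdown 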
For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlregressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-mlresiduals).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code advanced_forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=forecast_horizon, target_lags=12, target_rolling_window_size=4 ) automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. 
experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, forecasting_parameters=advanced_forecasting_parameters) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used for energy demand forecasting.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you would see1. Creating an Experiment in an existing Workspace2. 
Instantiating AutoMLConfig with new task type "forecasting" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: "time_column_name" 3. Training the Model using local compute4. Exploring the results5. Viewing the engineered names for featurized data and featurization summary for all raw features6. Testing the fitted model Setup ###Code import azureml.core import pandas as pd import numpy as np import logging from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. For AutoML you would need to create an Experiment. An Experiment is a named object in a Workspace, which is used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataRead energy demanding data from file, and preview data. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() # let's take note of what columns means what in the data time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Split the data into train and test sets ###Code X_train = data[data[time_column_name] < '2017-02-01'] X_test = data[data[time_column_name] >= '2017-02-01'] y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainInstantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. 
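For intuition on the primary metric chosen here, normalized RMSE is just the root mean squared error scaled so that it is comparable across targets of different magnitudes. A minimal sketch follows, assuming the common convention of normalizing by the range of the actual values (check the AutoML metrics documentation for the exact definition the service uses): ###Code
# Sketch of normalized RMSE; assumes normalization by the range of the actuals.
import numpy as np

def normalized_rmse(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

print(normalized_rmse([100, 120, 140, 160], [110, 118, 150, 155]))
###Output _____no_output_____ ###Markdown Now we define the settings and create the AutoMLConfig.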
###Code automl_settings = { "time_column_name": time_column_name } automl_config = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. 
Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metrics ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features to improve the forecast We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. 
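To build intuition for what these settings produce, here is a small pandas sketch of lag and rolling-window features on a toy series (illustration only; the lag of 1 and window of 3 are arbitrary, and the features AutoML actually engineers can be inspected with get_engineered_feature_names() as shown earlier): ###Code
# Illustration only: hand-built lag and rolling-window features on a toy demand series.
import pandas as pd

toy = pd.DataFrame({'demand': [5.0, 6.0, 7.5, 7.0, 8.2, 9.1, 8.7, 9.5]})

# lag feature: the target value one period earlier
toy['demand_lag1'] = toy['demand'].shift(1)

# rolling-window aggregates over the previous 3 periods (shifted so only past values are used)
past = toy['demand'].shift(1).rolling(window=3)
toy['demand_roll3_min'] = past.min()
toy['demand_roll3_max'] = past.max()
toy['demand_roll3_sum'] = past.sum()

toy
###Output _____no_output_____ ###Markdown The settings below ask AutoML to generate features of this kind automatically.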
###Code automl_settings_lags = { 'time_column_name': time_column_name, 'target_lags': 1, 'target_rolling_window_size': 5, # you MUST set the max_horizon when using lags and rolling windows # it is optional when looking-back features are not used 'max_horizon': len(y_test), # only one grain } automl_config_lags = AutoMLConfig(task = 'forecasting', debug_log = 'automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations = 10, iteration_timeout_minutes = 5, X = X_train, y = y_train, n_cross_validations = 3, path=project_folder, verbosity = logging.INFO, **automl_settings_lags) local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib notebook test_pred = plt.scatter(df_lags[target_column_name], df_lags['predicted'], color='b') test_test = plt.scatter(y_test, y_test, color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans.columns[:-1] expl = explain_model(fitted_model, X_train, X_test, features = features, best_run=best_run_lags, y_train = y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)1. [Results](Results)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. 
Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Choose a name for your cluster. amlcompute_cluster_name = "aml-compute" found = False # Check if this compute target already exists in the workspace. cts = ws.compute_targets if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute': found = True print('Found existing compute target.') compute_target = cts[amlcompute_cluster_name] if not found: print('Creating a new compute target...') provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6" #vm_priority = 'lowpriority', # optional max_nodes = 6) # Create the cluster.\n", compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config) print('Checking cluster status...') # Can poll for a minimum number of nodes and for a specific timeout. # If no min_node_count is provided, it will use the scale settings for the cluster. 
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20) # For a more detailed view of current AmlCompute status, use get_status(). ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. ###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().sort_values(time_column_name).tail(5).reset_index(drop=True) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().head(5).reset_index(drop=True) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. 
We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_minutes**|Maximum amount of time in minutes that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_minutes parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. Forecast function also can handle more complicated scenarios, see notebook on [high frequency forecasting](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. 
Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. ###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_minutes=20, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. 
# This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core._vendor.automl.client.core.common import metrics from matplotlib import pyplot as plt from automl.client.core.common import constants # use automl metrics module scores = metrics.compute_metrics_regression( df_all['predicted'], df_all[target_column_name], list(constants.Metric.SCALAR_REGRESSION_SET), None, None, None) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Forecasting using the Energy Demand Dataset**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data and Forecasting Configurations](Data)1. [Train](Train)Advanced Forecasting1. [Advanced Training](advanced_training)1. [Advanced Results](advanced_results) IntroductionIn this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.In this notebook you will learn how to:1. Creating an Experiment using an existing Workspace1. Configure AutoML using 'AutoMLConfig'1. Train the model using AmlCompute1. Explore the engineered features and results1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features1. Run and explore the forecast Setup ###Code import logging from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from matplotlib import pyplot as plt import pandas as pd import numpy as np import warnings import os # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None import azureml.core from azureml.core import Experiment, Workspace, Dataset from azureml.train.automl import AutoMLConfig from datetime import datetime ###Output _____no_output_____ ###Markdown This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ###Code print("This notebook was created using version 1.9.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ###Output _____no_output_____ ###Markdown As part of the setup you have already created an Azure ML `Workspace` object. 
For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-forecasting-energydemand' # # project folder # project_folder = './sample_projects/automl-forecasting-energy-demand' experiment = Experiment(ws, experiment_name) output = {} output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Create or Attach existing AmlComputeA compute target is required to execute a remote Automated ML run. [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource. Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process.As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. ###Code from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your cluster. amlcompute_cluster_name = "energy-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2', max_nodes=6) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown DataWe will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasetsdataset-types) to be used training and prediction. Let's set up what we know about the dataset.Target column is what we want to forecast.Time column is the time axis along which to predict.The other columns, "temp" and "precip", are implicitly designated as features. 
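If you just want to eyeball the raw file outside of an AzureML dataset, the same public CSV can also be read directly with pandas (entirely optional; the tabular dataset created below is what the experiment actually uses): ###Code
# Optional: peek at the raw CSV with plain pandas; training uses the AzureML Dataset below.
import pandas as pd

raw = pd.read_csv(
    "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv",
    parse_dates=['timeStamp'])
print(raw.dtypes)
raw.head()
###Output _____no_output_____ ###Markdown Now we record the column roles and create the tabular dataset.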
###Code target_column_name = 'demand' time_column_name = 'timeStamp' dataset = Dataset.Tabular.from_delimited_files(path = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv").with_timestamp_columns(fine_grain_timestamp=time_column_name) dataset.take(5).to_pandas_dataframe().reset_index(drop=True) ###Output _____no_output_____ ###Markdown The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset. ###Code # Cut off the end of the dataset due to large number of nan values dataset = dataset.time_before(datetime(2017, 10, 10, 5)) ###Output _____no_output_____ ###Markdown Split the data into train and test sets The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing. ###Code # split into train based on time train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True) train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5) # split into test based on time test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5)) test.to_pandas_dataframe().reset_index(drop=True).head(5) ###Output _____no_output_____ ###Markdown Setting the maximum forecast horizonThe forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecastconfigure-and-run-experiment) guide.In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown TrainInstantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the name of the time column and the maximum forecast horizon.|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error||**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).||**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.||**training_data**|The training data to be used within the experiment.||**label_column_name**|The name of the label column.||**compute_target**|The remote compute for training.||**n_cross_validations**|Number of cross validation splits. 
Rolling Origin Validation is used to split time-series in a temporally consistent way.||**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.||**time_column_name**|The name of your time column.||**max_horizon**|The number of periods out you would like to predict past your training data. Periods are inferred from your data.| This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results. ###Code automl_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, n_cross_validations=3, verbosity=logging.INFO, **automl_settings) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.One may specify `show_output = True` to print currently running iterations to the console. ###Code remote_run = experiment.submit(automl_config, show_output=False) remote_run remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best model from all the training iterations using get_output method. ###Code best_run, fitted_model = remote_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown FeaturizationYou can access the engineered feature names generated in time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown View featurization summaryYou can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:+ Raw feature name+ Number of engineered features formed out of this raw feature+ Type detected+ If feature was dropped+ List of feature transformations for the raw feature ###Code # Get the featurization summary as a list of JSON featurization_summary = fitted_model.named_steps['timeseriestransformer'].get_featurization_summary() # View the featurization summary as a pandas dataframe pd.DataFrame.from_records(featurization_summary) ###Output _____no_output_____ ###Markdown ForecastingNow that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set: ###Code X_test = test.to_pandas_dataframe().reset_index(drop=True) y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown Forecast FunctionFor forecasting, we will use the forecast function instead of the predict function. Using the predict method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. 
Forecast function also can handle more complicated scenarios, see the [forecast function notebook](../forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model.forecast(X_test) ###Output _____no_output_____ ###Markdown EvaluateTo evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows. ###Code from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Advanced Training We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. Using lags and rolling window featuresNow we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results. 
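To make the lag and rolling-window settings concrete before configuring the run, here is a small pandas illustration (an added sketch, not part of the original notebook) of what a lag feature and rolling `max`/`min`/`sum` aggregates look like on a toy series. AutoML builds analogous features internally from `target_lags` and `target_rolling_window_size`, so the exact engineered columns may differ. ###Code
import pandas as pd

# Toy target series to illustrate the idea behind lag / rolling-window features
toy = pd.DataFrame({'demand': [10, 12, 13, 15, 14, 16]})

# Lag feature: the value of the target one period earlier
toy['demand_lag1'] = toy['demand'].shift(1)

# Rolling-window aggregates over the previous 3 periods
# (shifted by one so the window only ever uses past values)
past = toy['demand'].shift(1)
toy['demand_rollmax3'] = past.rolling(window=3).max()
toy['demand_rollmin3'] = past.rolling(window=3).min()
toy['demand_rollsum3'] = past.rolling(window=3).sum()

print(toy)
###Output _____no_output_____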
###Code automl_advanced_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4, } automl_config = AutoMLConfig(task='forecasting', primary_metric='normalized_root_mean_squared_error', blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. experiment_timeout_hours=0.3, training_data=train, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping = True, n_cross_validations=3, verbosity=logging.INFO, **automl_advanced_settings) ###Output _____no_output_____ ###Markdown We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code advanced_remote_run = experiment.submit(automl_config, show_output=False) advanced_remote_run.wait_for_completion() ###Output _____no_output_____ ###Markdown Retrieve the Best Model ###Code best_run_lags, fitted_model_lags = advanced_remote_run.get_output() ###Output _____no_output_____ ###Markdown Advanced ResultsWe did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation. ###Code # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_predictions, X_trans = fitted_model_lags.forecast(X_test) from forecasting_helper import align_outputs df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name) from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl metrics module scores = scoring.score_regression( y_test=df_all[target_column_name], y_pred=df_all['predicted'], metrics=list(constants.Metric.SCALAR_REGRESSION_SET)) print("[Test data scores]\n") for key, value in scores.items(): print('{}: {:.3f}'.format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b') test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g') plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png) Automated Machine Learning_**Energy Demand Forecasting**_ Contents1. 
[Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) IntroductionIn this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.Notebook synopsis:1. Creating an Experiment in an existing Workspace2. Configuration and local run of AutoML for a simple time-series model3. View engineered features and prediction results4. Configuration and local run of AutoML for a time-series model with lag and rolling window features5. Estimate feature importance Setup ###Code import azureml.core import pandas as pd import numpy as np import logging import warnings # Squash warning messages for cleaner output in the notebook warnings.showwarning = lambda *args, **kwargs: None from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from matplotlib import pyplot as plt from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score ###Output _____no_output_____ ###Markdown As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. ###Code ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = 'automl-energydemandforecasting' # project folder project_folder = './sample_projects/automl-local-energydemandforecasting' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Run History Name'] = experiment_name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataWe will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects. ###Code data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp']) data.head() ###Output _____no_output_____ ###Markdown We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features. ###Code # Dataset schema time_column_name = 'timeStamp' target_column_name = 'demand' ###Output _____no_output_____ ###Markdown Forecast HorizonIn addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. 
Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on a the time-scale of a day or two, however.Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale. Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours. ###Code max_horizon = 48 ###Output _____no_output_____ ###Markdown Split the data into train and test setsWe now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand. ###Code # Find time point to split on latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max() split_time = latest_known_time - pd.Timedelta(hours=max_horizon) # Split into train/test sets X_train = data[data[time_column_name] <= split_time] X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)] # Move the target values into their own arrays y_train = X_train.pop(target_column_name).values y_test = X_test.pop(target_column_name).values ###Output _____no_output_____ ###Markdown TrainWe now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:|Property|Description||-|-||**task**|forecasting||**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics spearman_correlationnormalized_root_mean_squared_errorr2_scorenormalized_mean_absolute_error|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], targets values.||**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. 
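The table above mentions Rolling Origin Validation for `n_cross_validations`. As a rough illustration of the idea (a sketch only; AutoML computes its own splits internally), scikit-learn's `TimeSeriesSplit` follows the same principle of training on an expanding window of the past and validating on the periods that follow: ###Code
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Illustration only: 12 stand-in time periods, 3 temporally ordered folds
periods = np.arange(12).reshape(-1, 1)
for fold, (train_idx, valid_idx) in enumerate(TimeSeriesSplit(n_splits=3).split(periods)):
    print('fold', fold,
          '| train periods', train_idx.min(), '-', train_idx.max(),
          '| validation periods', valid_idx.min(), '-', valid_idx.max())
###Output _____no_output_____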
###Code time_series_settings = { 'time_column_name': time_column_name, 'max_horizon': max_horizon } automl_config = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', iterations=10, iteration_timeout_minutes=5, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity = logging.INFO, **time_series_settings) ###Output _____no_output_____ ###Markdown Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.You will see the currently running iterations printing to the console. ###Code local_run = experiment.submit(automl_config, show_output=True) local_run ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration. ###Code best_run, fitted_model = local_run.get_output() fitted_model.steps ###Output _____no_output_____ ###Markdown View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the time-series featurization. ###Code fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names() ###Output _____no_output_____ ###Markdown Test the Best Fitted ModelFor forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` serves as a question mark for the forecaster to fill with the actuals. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known. Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use. ###Code # Replace ALL values in y_pred by NaN. # The forecast origin will be at the beginning of the first forecast period # (which is the same time as the end of the last training period). y_query = y_test.copy().astype(np.float) y_query.fill(np.nan) # The featurized data, aligned to y, will also be returned. # This contains the assumptions that were made in the forecast # and helps align the forecast to the original data y_fcst, X_trans = fitted_model.forecast(X_test, y_query) # limit the evaluation to data where y_test has actuals def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'): """ Demonstrates how to get the output aligned to the inputs using pandas indexes. Helps understand what happened if the output's shape differs from the input shape, or if the data got re-sorted by time and grain during forecasting. 
Typical causes of misalignment are: * we predicted some periods that were missing in actuals -> drop from eval * model was asked to predict past max_horizon -> increase max horizon * data at start of X_test was needed for lags -> provide previous periods """ df_fcst = pd.DataFrame({predicted_column_name : y_predicted}) # y and X outputs are aligned by forecast() function contract df_fcst.index = X_trans.index # align original X_test to y_test X_test_full = X_test.copy() X_test_full[target_column_name] = y_test # X_test_full's does not include origin, so reset for merge df_fcst.reset_index(inplace=True) X_test_full = X_test_full.reset_index().drop(columns='index') together = df_fcst.merge(X_test_full, how='right') # drop rows where prediction or actuals are nan # happens because of missing actuals # or at edges of time due to lags/rolling windows clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)] return(clean) df_all = align_outputs(y_fcst, X_trans, X_test, y_test) df_all.head() ###Output _____no_output_____ ###Markdown Looking at `X_trans` is also useful to see what featurization happened to the data. ###Code X_trans ###Output _____no_output_____ ###Markdown Calculate accuracy metricsFinally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set. ###Code def MAPE(actual, pred): """ Calculate mean absolute percentage error. Remove NA and values where actual is close to zero """ not_na = ~(np.isnan(actual) | np.isnan(pred)) not_zero = ~np.isclose(actual, 0.0) actual_safe = actual[not_na & not_zero] pred_safe = pred[not_na & not_zero] APE = 100*np.abs((actual_safe - pred_safe)/actual_safe) return np.mean(APE) print("Simple forecasting model") rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_all[target_column_name], df_all['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted'])) # Plot outputs %matplotlib notebook pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b') actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.title('Prediction vs. Actual Time-Series') plt.show() ###Output _____no_output_____ ###Markdown The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms. Using lags and rolling window features We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. 
The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features. ###Code time_series_settings_with_lags = { 'time_column_name': time_column_name, 'max_horizon': max_horizon, 'target_lags': 12, 'target_rolling_window_size': 4 } automl_config_lags = AutoMLConfig(task='forecasting', debug_log='automl_nyc_energy_errors.log', primary_metric='normalized_root_mean_squared_error', blacklist_models=['ElasticNet'], iterations=10, iteration_timeout_minutes=10, X=X_train, y=y_train, n_cross_validations=3, path=project_folder, verbosity=logging.INFO, **time_series_settings_with_lags) ###Output _____no_output_____ ###Markdown We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations. ###Code local_run_lags = experiment.submit(automl_config_lags, show_output=True) best_run_lags, fitted_model_lags = local_run_lags.get_output() y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query) df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test) df_lags.head() X_trans_lags print("Forecasting model with lags") rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted'])) print("[Test Data] \nRoot Mean squared error: %.2f" % rmse) mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted']) print('mean_absolute_error score: %.2f' % mae) print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted'])) # Plot outputs %matplotlib notebook pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b') actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g') plt.xticks(fontsize=8) plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8) plt.show() ###Output _____no_output_____ ###Markdown What features matter for the forecast? ###Code from azureml.train.automl.automlexplainer import explain_model # feature names are everything in the transformed data except the target features = X_trans_lags.columns[:-1] expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train) # unpack the tuple shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl best_run_lags ###Output _____no_output_____
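###Markdown A quick way to inspect the unpacked explanation is to chart the overall importances. This is an illustrative sketch that assumes `feat_names` and `feat_overall_imp` returned above are parallel sequences (name i matching importance i); the exact return types of `explain_model` can vary across SDK versions. ###Code
import numpy as np
from matplotlib import pyplot as plt

# Assumes feat_names / feat_overall_imp from the explain_model() cell above
names = list(feat_names)
imps = np.asarray(feat_overall_imp, dtype=float)
order = np.argsort(imps)[::-1]

plt.figure(figsize=(10, 4))
plt.bar(range(len(order)), imps[order])
plt.xticks(range(len(order)), [names[i] for i in order], rotation=90)
plt.ylabel('overall importance')
plt.title('Global feature importance (model with lags)')
plt.tight_layout()
plt.show()
###Output _____no_output_____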
ML-Python/Week3-4/submissions/A4/Mahir/A4_logistic2.ipynb
###Markdown Regularised Logistic Regression. ###Code import numpy as np #Importing required modules and libraries from math import * from numpy import linalg import matplotlib.pyplot as plt train_data = np.genfromtxt('ex2data2.txt',delimiter=',') print "Target Values:-" y = train_data[:,-1] print y X= train_data[:,0:2] m = len(y) X= np.insert(X,0,np.ones(m),axis = 1) alpha = 0.1 theta = np.random.random(3) def calcost(hypothesis,y,theta): loghyp = np.log(hypothesis) sum = (np.dot(y,loghyp)) + np.dot((1-y),np.log(1-hypothesis)) - 2*np.sum(theta[1:]**2) return -sum/m def sigmoid(z): return 1/(1 + np.exp(-z)) prod = np.dot(X,theta.transpose()) hypothesis = sigmoid(prod) oldcost = calcost(hypothesis,y,theta) diff = hypothesis - y for i in range(300): theta = theta*(1-(alpha*2)/m) - (alpha/m)*(np.sum(np.dot(diff,X))) theta[0] = theta[0] - (alpha/m)*(np.sum(np.dot(diff,X))) prod = np.dot(X,theta.transpose()) hypothesis = 1/(1 + np.exp(-prod)) diff = hypothesis - y newcost = calcost(hypothesis,y,theta) print newcost print "Values of theta:- ",theta prod = np.dot(X,theta.transpose()) predicted = sigmoid(prod) print "The predicted values are as follows:- " print predicted ###Output Values of theta:- [-0.19637577 0.34728744 0.08492231] The predicted values are as follows:- [ 0.47024442 0.45749851 0.4472453 0.42948285 0.41700563 0.41076469 0.4178289 0.42093645 0.44400998 0.45160057 0.47239614 0.48580625 0.50120165 0.51094393 0.50871861 0.49123253 0.4768137 0.46425133 0.45086318 0.42340826 0.41195006 0.40461726 0.40078586 0.40956438 0.43283519 0.45570463 0.48000082 0.49799527 0.45060801 0.44216923 0.44847319 0.45400739 0.44978754 0.43673218 0.4265109 0.42597146 0.42634367 0.44169474 0.45303549 0.46219603 0.47153548 0.48894162 0.50387527 0.49113058 0.50873633 0.50656372 0.52327526 0.48149692 0.50359676 0.49355992 0.47509319 0.45893766 0.45472652 0.44068988 0.43686839 0.41819799 0.43191979 0.43346173 0.48668191 0.4869233 0.49000856 0.51095931 0.51983166 0.51604285 0.52800498 0.53900966 0.52819512 0.53613015 0.53259377 0.52410262 0.52446404 0.51706191 0.51180686 0.50803096 0.48665078 0.48325656 0.48248045 0.46909654 0.45960832 0.44258069 0.45170313 0.430297 0.40768094 0.41228603 0.38306644 0.39290962 0.39317257 0.40601358 0.43129209 0.43768143 0.42657706 0.41906952 0.47643494 0.47538518 0.46298176 0.46744302 0.49893977 0.50285514 0.52523529 0.53638599 0.52296536 0.54588041 0.43491941 0.41756191 0.430499 0.40317395 0.40854631 0.42512926 0.39785657 0.39059236 0.39169397 0.38801752 0.38742672 0.40109925 0.41081434 0.4305129 0.4716016 0.50518381]
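###Markdown Note that in the update above, `np.sum(np.dot(diff, X))` reduces the gradient to a single scalar, so every parameter receives the same correction. A vectorized version of the L2-regularised update this notebook appears to be aiming for (leaving the bias term unpenalised) is sketched below; the learning rate, lambda and iteration count are illustrative choices rather than values from the assignment. ###Code
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_regularised_logistic(X, y, alpha=0.1, lam=1.0, iterations=300):
    """Batch gradient descent for L2-regularised logistic regression.
    Expects X to already contain the leading column of ones, as above."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        h = sigmoid(X.dot(theta))
        grad = X.T.dot(h - y) / m           # one gradient component per parameter
        grad[1:] += (lam / m) * theta[1:]   # penalise every weight except the bias
        theta -= alpha * grad
    return theta

# Usage with the arrays prepared earlier in the notebook:
# theta = fit_regularised_logistic(X, y)
# predicted = sigmoid(X.dot(theta))
###Output _____no_output_____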
Notebooks/LSTM_text_gen_Dickens.ipynb
###Markdown Example script to generate text from Gutenberg.org texts Dr. Tirthajyoti Sarkar, Fremont, CAThis LSTM text-generation script was trained on Charles Dickens' "The Great Expectation" text from Gutenberg project.The link to the text file: http://www.gutenberg.org/files/1400/1400-0.txtIt is recommended to run this script on GPU, as recurrent networks are quite computationally intensive.If you try this script on new data, make sure your corpus has at least ~100k characters. ~1M is better. ###Code from __future__ import print_function from keras.callbacks import LambdaCallback from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import LSTM from keras.optimizers import RMSprop from keras.utils.data_utils import get_file import numpy as np import random import sys import io ###Output _____no_output_____ ###Markdown Get the corpus from the internet (or some server or local disk) ###Code path = get_file( 'Great-expectations.txt', origin='http://www.gutenberg.org/files/1400/1400-0.txt') with io.open(path, encoding='utf-8') as f: text = f.read().lower() print('corpus length:', len(text)) ###Output Downloading data from http://www.gutenberg.org/files/1400/1400-0.txt 1056768/1049619 [==============================] - 0s 0us/step corpus length: 1013445 ###Markdown Process the text, create short sequences of chosen length ###Code chars = sorted(list(set(text))) print('total chars:', len(chars)) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) # cut the text in semi-redundant sequences of maxlen characters maxlen = 100 step = 5 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('nb sequences:', len(sentences)) ###Output nb sequences: 202669 ###Markdown Vectorize the sequences for feeding to the neural network ###Code print('Vectorization...') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 ###Output Vectorization... ###Markdown Build the LSTM model with two layers of 128 neurons eachWe are using `RMSprop` optimizer with a rather high learning rate of 0.01. Please play with these options as you see fit. 
###Code print('Build model...') model = Sequential() model.add(LSTM(128, input_shape=(maxlen, len(chars)),return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(128)) model.add(Dense(len(chars), activation='softmax')) optimizer = RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) model.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_20 (LSTM) (None, 100, 128) 97792 _________________________________________________________________ dropout_5 (Dropout) (None, 100, 128) 0 _________________________________________________________________ lstm_21 (LSTM) (None, 128) 131584 _________________________________________________________________ dense_10 (Dense) (None, 62) 7998 ================================================================= Total params: 237,374 Trainable params: 237,374 Non-trainable params: 0 _________________________________________________________________ ###Markdown Random sampling function for introducing diversity in the choice of the character for generating text ###Code def sample(preds, temperature=1.0): # helper function to sample an index from a probability array preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) ###Output _____no_output_____ ###Markdown Set the length of the text to be generated ###Code text_length = 200 ###Output _____no_output_____ ###Markdown An empty dictionary to store the generated text ###Code store = {} ###Output _____no_output_____ ###Markdown Callback function for `on_epoch_end` ###Code def on_epoch_end(epoch, _): # Function invoked at end of each epoch. Prints generated text. print() print('----- Generating text after Epoch: %d' % epoch) print('-'*100) start_index = random.randint(0, len(text) - maxlen - 1) for diversity in [0.5]: print('----- diversity:', diversity) generated = '' sentence = text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') #sys.stdout.write(generated) vector=[] for i in range(text_length): x_pred = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x_pred[0, t, char_indices[char]] = 1. preds = model.predict(x_pred, verbose=0)[0] next_index = sample(preds, diversity) next_char = indices_char[next_index] sentence = sentence[1:] + next_char #sys.stdout.write(next_char) vector.append(next_char) #sys.stdout.flush() print("-"*100) #print("SENTENCE:",sentence) #print("-"*100) print("GENERATED", ''.join(vector)) store['Epoch_'+str(epoch)]=''.join(vector) print() ###Output _____no_output_____ ###Markdown Setting the `LambdaCallback` ###Code print_callback = LambdaCallback(on_epoch_end=on_epoch_end) ###Output _____no_output_____ ###Markdown Choose the batch size and the number of epochsEach epoch training took ~ 9-10 minutes on my modest laptop with NVidia GTX 1060 Ti GPU (6 GB Video RAM), Core i-7 8770 CPU, 16 GB DDR4. 
###Code batch_size = 128 epochs = 50 model.fit(x, y, batch_size=batch_size, epochs=epochs, callbacks=[print_callback]) ###Output Epoch 1/50 202669/202669 [==============================] - 543s 3ms/step - loss: 2.0212 ----- Generating text after Epoch: 0 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: "me hour. next day i set myself to get the boat. it was soon done, and the boat was brought round to" ---------------------------------------------------------------------------------------------------- GENERATED my and betword and and no meren and of the bre and herbert have her of the dorning the toods of assice and and alwer have have soon in the want and look of a roon of her sece of the groom and and all Epoch 2/50 202669/202669 [==============================] - 539s 3ms/step - loss: 1.6671 ----- Generating text after Epoch: 1 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: "er hair, and she had bridal flowers in her hair, but her hair was white. some bright jewels sparkled" ---------------------------------------------------------------------------------------------------- GENERATED to a boy, that i had be did and she was priding the sempled, and so this sain of the wears, that i was of path to hand the shrates, and but the sever to she would she had she was and some the was lon Epoch 3/50 202669/202669 [==============================] - 539s 3ms/step - loss: 1.5919 ----- Generating text after Epoch: 2 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: "all in lively anticipation of “the two villains” being taken, and when the bellows seemed to roar fo" ---------------------------------------------------------------------------------------------------- GENERATED r the house. “the lost to be do be as so the looking ton't better at his shalling that he had the secase while brought looked that i don't seen all thing hand all these pain to be than i thought the Epoch 4/50 202669/202669 [==============================] - 539s 3ms/step - loss: 1.5522 ----- Generating text after Epoch: 3 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: "ansaction before to-day. official sentiments are one thing. we are extra official.” i cordially ass" ---------------------------------------------------------------------------------------------------- GENERATED ible so again of the dismess in which i should have the shoulder, and he had like to herbert, on the night of the in the man with his chambers of the sumply as nothing a licks of my lope in that berer Epoch 5/50 202669/202669 [==============================] - 540s 3ms/step - loss: 1.5256 ----- Generating text after Epoch: 4 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: "of that manly heart as he gave me his hand. 
“pip, dear old chap, life is made of ever so many parti" ---------------------------------------------------------------------------------------------------- GENERATED ng in the room, and the showor and in the simes of the shoulder of his stood to be the shart of biddy, and i was the distrecked in the him all reparatry his hands and wanting, and a expressing to the Epoch 6/50 202669/202669 [==============================] - 540s 3ms/step - loss: 1.5059 ----- Generating text after Epoch: 5 ---------------------------------------------------------------------------------------------------- ----- diversity: 0.5 ----- Generating with seed: " do assure you, pip,” he would often say, in explanation of that liberty; “i found her a tapping the" ###Markdown The loss vs. epochWe will see that loss suddenly goes up after 45 or so epochs. This is a stability problem with complex LSTM models. Model architecture and hyperparameter tuning is needed to continue improving the result. ###Code import matplotlib.pyplot as plt plt.figure(figsize=(7,5)) plt.plot(model.history.history['loss'],lw=3,c='k') plt.grid(True) plt.xticks(fontsize=15) plt.yticks(fontsize=15) plt.xlabel("Epochs",fontsize=15) plt.ylabel("Loss",fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown Some example generated texts ###Code # The first 3 phrases for i in range(3): print() print("PHRASE NO. {}".format(i+1)) print("-"*60) print(store['Epoch_'+str(i)]) print("="*100) # Some in the middle for i in range(32,35): print() print("PHRASE NO. {}".format(i+1)) print("-"*60) print(store['Epoch_'+str(i)]) print("="*100) # Some at the end when loss went up too high for i in range(47,50): print() print("PHRASE NO. {}".format(i+1)) print("-"*60) print(store['Epoch_'+str(i)]) print("="*100) ###Output PHRASE NO. 48 ------------------------------------------------------------ 0:0010:100:101:100011111011:010::1:110101100110:0010::1011011101101111011111101:1:10100110110111011011110:101:1::010:1111111:1:0:0011101110111111011:0:0011:11010:0011:0110:1101:001:1:0:1:1110111:11011 ==================================================================================================== PHRASE NO. 49 ------------------------------------------------------------ we said in a though1isant 1111t111t 1)111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 ==================================================================================================== PHRASE NO. 50 ------------------------------------------------------------ ing 1ro11 11ter t1 the compet, and he w11tite11roog 1lesiders of the the shop1 it he there i h111t of11111:111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 ====================================================================================================
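###Markdown Once training has finished, the same temperature-based sampling used in the callback can be wrapped in a small helper to generate text from an arbitrary seed. This is an illustrative sketch reusing `model`, `sample`, `maxlen`, `chars`, `char_indices` and `indices_char` from above; the seed string and temperatures are arbitrary. Lower temperatures give more conservative text, higher ones more surprising (and more error-prone) text. ###Code
def generate_text(seed, length=200, temperature=0.5):
    """Generate `length` characters from the trained model, starting from `seed`."""
    sentence = seed.lower()[-maxlen:].rjust(maxlen)  # the model expects exactly maxlen characters
    generated = []
    for _ in range(length):
        x_pred = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(sentence):
            if char in char_indices:             # skip characters unseen in the corpus
                x_pred[0, t, char_indices[char]] = 1.
        preds = model.predict(x_pred, verbose=0)[0]
        next_char = indices_char[sample(preds, temperature)]
        generated.append(next_char)
        sentence = sentence[1:] + next_char
    return ''.join(generated)

# Example usage (illustrative seed and temperatures)
for temp in [0.2, 0.5, 1.0]:
    print('----- temperature:', temp)
    print(generate_text('it was the marshes that made me think of ', length=120, temperature=temp))
###Output _____no_output_____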
vgg_segmentation_keras/fcn16s_segmentation_keras2.ipynb
###Markdown Build model architecture Fully Convolutional Networks for Semantic Segmentation Jonathan Long, Evan Shelhamer, Trevor Darrellwww.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdfExtract from the article relating to the model architecture.The model is derived from VGG16.**remark** : deconvolution and conv-transpose are synonyms, they perform up-sampling 4.1. From classifier to dense FCNWe decapitate each net by discarding the final classifier layer [**code comment** : *this is why fc8 is not included*], and convert all fully connected layers to convolutions.We append a 1x1 convolution with channel dimension 21 [**code comment** : *layer named score_fr*] to predict scores for each of the PASCAL classes (including background) at each of the coarse output locations, followed by a deconvolution layer to bilinearly upsample the coarse outputs to pixel-dense outputs as described in Section 3.3. 4.2. Combining what and whereWe define a new fully convolutional net (FCN) for segmentation that combines layers of the feature hierarchy andrefines the spatial precision of the output.While fully convolutionalized classifiers can be fine-tuned to segmentation as shown in 4.1, and even score highly on the standard metric, their output is dissatisfyingly coarse.The 32 pixel stride at the final prediction layer limits the scale of detail in the upsampled output.We address this by adding skips that combine the final prediction layer with lower layers with finer strides.This turns a line topology into a DAG [**code comment** : *this is why some latter stage layers have 2 inputs*], with edges that skip ahead from lower layers to higher ones.As they see fewer pixels, the finer scale predictions should need fewer layers, so it makes sense to make them from shallower net outputs.Combining fine layers and coarse layers lets the model make local predictions that respect global structure.We first divide the output stride in half by predicting from a 16 pixel stride layer.We add a 1x1 convolution layer on top of pool4 [**code comment** : *the score_pool4_filter layer*] to produce additional class predictions.We fuse this output with the predictions computed on top of conv7 (convolutionalized fc7) at stride 32 by adding a 2x upsampling layer and summing [**code comment** : *layer named sum*] both predictions [**code warning** : *requires first layer crop to insure the same size*].Finally, the stride 16 predictions are upsampled back to the image [**code comment** : *layer named upsample_new*].We call this net FCN-16s. 
Remark :**The original paper mention that FCN-8s (slightly more complex architecture) does not provide much improvement so we stopped at FCN-16s** ###Code image_size = 64*8 # INFO: initially tested with 256, 448, 512 fcn32model = fcn32_blank(image_size) #fcn32model.summary() # visual inspection of model architecture fcn16model = fcn_32s_to_16s(fcn32model) # INFO : dummy image array to test the model passes imarr = np.ones((image_size,image_size, 3)) imarr = np.expand_dims(imarr, axis=0) #testmdl = Model(fcn32model.input, fcn32model.layers[10].output) # works fine testmdl = fcn16model # works fine testmdl.predict(imarr).shape if (testmdl.predict(imarr).shape != (1, image_size, image_size, 21)): print('WARNING: size mismatch will impact some test cases') fcn16model.summary() # visual inspection of model architecture ###Output ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== permute_1_input (InputLayer) (None, 512, 512, 3) 0 ____________________________________________________________________________________________________ permute_1 (Permute) (None, 512, 512, 3) 0 ____________________________________________________________________________________________________ conv1_1 (Conv2D) (None, 512, 512, 64) 1792 ____________________________________________________________________________________________________ conv1_2 (Conv2D) (None, 512, 512, 64) 36928 ____________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 256, 256, 64) 0 ____________________________________________________________________________________________________ conv2_1 (Conv2D) (None, 256, 256, 128) 73856 ____________________________________________________________________________________________________ conv2_2 (Conv2D) (None, 256, 256, 128) 147584 ____________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 128, 128, 128) 0 ____________________________________________________________________________________________________ conv3_1 (Conv2D) (None, 128, 128, 256) 295168 ____________________________________________________________________________________________________ conv3_2 (Conv2D) (None, 128, 128, 256) 590080 ____________________________________________________________________________________________________ conv3_3 (Conv2D) (None, 128, 128, 256) 590080 ____________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 64, 64, 256) 0 ____________________________________________________________________________________________________ conv4_1 (Conv2D) (None, 64, 64, 512) 1180160 ____________________________________________________________________________________________________ conv4_2 (Conv2D) (None, 64, 64, 512) 2359808 ____________________________________________________________________________________________________ conv4_3 (Conv2D) (None, 64, 64, 512) 2359808 ____________________________________________________________________________________________________ max_pooling2d_4 (MaxPooling2D) (None, 32, 32, 512) 0 ____________________________________________________________________________________________________ conv5_1 (Conv2D) (None, 32, 32, 512) 2359808 
____________________________________________________________________________________________________ conv5_2 (Conv2D) (None, 32, 32, 512) 2359808 ____________________________________________________________________________________________________ conv5_3 (Conv2D) (None, 32, 32, 512) 2359808 ____________________________________________________________________________________________________ max_pooling2d_5 (MaxPooling2D) (None, 16, 16, 512) 0 ____________________________________________________________________________________________________ fc6 (Conv2D) (None, 16, 16, 4096) 102764544 ____________________________________________________________________________________________________ fc7 (Conv2D) (None, 16, 16, 4096) 16781312 ____________________________________________________________________________________________________ score_fr (Conv2D) (None, 16, 16, 21) 86037 ____________________________________________________________________________________________________ score2 (Conv2DTranspose) (None, 34, 34, 21) 7077 ____________________________________________________________________________________________________ score_pool4 (Conv2D) (None, 32, 32, 21) 10773 ____________________________________________________________________________________________________ cropping2d_1 (Cropping2D) (None, 32, 32, 21) 0 ____________________________________________________________________________________________________ add_1 (Add) (None, 32, 32, 21) 0 ____________________________________________________________________________________________________ upsample_new (Conv2DTranspose) (None, 528, 528, 21) 451605 ____________________________________________________________________________________________________ cropping2d_2 (Cropping2D) (None, 512, 512, 21) 0 ==================================================================================================== Total params: 134,816,036 Trainable params: 134,816,036 Non-trainable params: 0 ____________________________________________________________________________________________________ ###Markdown Load VGG weigths from .mat file https://www.vlfeat.org/matconvnet/pretrained/semantic-segmentation Download from console with :wget https://www.vlfeat.org/matconvnet/models/pascal-fcn16s-dag.mat ###Code from scipy.io import loadmat data = loadmat('pascal-fcn16s-dag.mat', matlab_compatible=False, struct_as_record=False) l = data['layers'] p = data['params'] description = data['meta'][0,0].classes[0,0].description l.shape, p.shape, description.shape class2index = {} for i, clname in enumerate(description[0,:]): class2index[str(clname[0])] = i print(sorted(class2index.keys())) if False: # inspection of data structure print(dir(l[0,31].block[0,0])) print(dir(l[0,36].block[0,0])) for i in range(0, p.shape[1]-1, 2): print(i, str(p[0,i].name[0]), p[0,i].value.shape, str(p[0,i+1].name[0]), p[0,i+1].value.shape) for i in range(l.shape[1]): print(i, str(l[0,i].name[0]), str(l[0,i].type[0]), [str(n[0]) for n in l[0,i].inputs[0,:]], [str(n[0]) for n in l[0,i].outputs[0,:]]) # documentation for the dagnn.Crop layer : # https://github.com/vlfeat/matconvnet/blob/master/matlab/%2Bdagnn/Crop.m def copy_mat_to_keras(kmodel): kerasnames = [lr.name for lr in kmodel.layers] prmt = (0, 1, 2, 3) # WARNING : important setting as 2 of the 4 axis have same size dimension for i in range(0, p.shape[1]-1, 2): matname = '_'.join(p[0,i].name[0].split('_')[0:-1]) if matname in kerasnames: kindex = kerasnames.index(matname) print('found : ', (str(matname), kindex)) l_weights = p[0,i].value l_bias = 
p[0,i+1].value f_l_weights = l_weights.transpose(prmt) if False: # WARNING : this depends on "image_data_format":"channels_last" in keras.json file f_l_weights = np.flip(f_l_weights, 0) f_l_weights = np.flip(f_l_weights, 1) print(f_l_weights.shape, kmodel.layers[kindex].get_weights()[0].shape) assert (f_l_weights.shape == kmodel.layers[kindex].get_weights()[0].shape) assert (l_bias.shape[1] == 1) assert (l_bias[:,0].shape == kmodel.layers[kindex].get_weights()[1].shape) assert (len(kmodel.layers[kindex].get_weights()) == 2) kmodel.layers[kindex].set_weights([f_l_weights, l_bias[:,0]]) else: print('not found : ', str(matname)) #copy_mat_to_keras(fcn32model) copy_mat_to_keras(fcn16model) im = Image.open('rgb.jpg') # http://www.robots.ox.ac.uk/~szheng/crfasrnndemo/static/rgb.jpg im = im.crop((0,0,319,319)) # WARNING : manual square cropping im = im.resize((image_size,image_size)) plt.imshow(np.asarray(im)) print(np.asarray(im).shape) crpim = im # WARNING : we deal with cropping in a latter section, this image is already fit preds = prediction(fcn16model, crpim, transform=False) # WARNING : transfrom=True requires a code change (dim order) #imperson = preds[0,class2index['person'],:,:] print(preds.shape) imclass = np.argmax(preds, axis=3)[0,:,:] print(imclass.shape) plt.figure(figsize = (15, 7)) plt.subplot(1,3,1) plt.imshow( np.asarray(crpim) ) plt.subplot(1,3,2) plt.imshow( imclass ) plt.subplot(1,3,3) plt.imshow( np.asarray(crpim) ) masked_imclass = np.ma.masked_where(imclass == 0, imclass) #plt.imshow( imclass, alpha=0.5 ) plt.imshow( masked_imclass, alpha=0.5 ) # List of dominant classes found in the image for c in np.unique(imclass): print(c, str(description[0,c][0])) bspreds = bytescale(preds, low=0, high=255) plt.figure(figsize = (15, 7)) plt.subplot(2,3,1) plt.imshow(np.asarray(crpim)) plt.subplot(2,3,3+1) plt.imshow(bspreds[0,:,:,class2index['background']], cmap='seismic') plt.subplot(2,3,3+2) plt.imshow(bspreds[0,:,:,class2index['person']], cmap='seismic') plt.subplot(2,3,3+3) plt.imshow(bspreds[0,:,:,class2index['bicycle']], cmap='seismic') ###Output _____no_output_____
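###Markdown As a final check (an added sketch, not part of the original notebook), the per-class pixel coverage of the segmentation can be summarised directly from `imclass` and `description` defined above. ###Code
import numpy as np

# Fraction of pixels assigned to each predicted class
labels, counts = np.unique(imclass, return_counts=True)
total_pixels = float(imclass.size)
for label, count in zip(labels, counts):
    print(str(description[0, label][0]), ':', round(100.0 * count / total_pixels, 2), '% of pixels')
###Output _____no_output_____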
8). Fine-Tuning Classification Algorithms/.ipynb_checkpoints/Exercise 05-09-Lesson 08-checkpoint.ipynb
###Markdown Reading the data using pandas ###Code data= pd.read_csv('Churn_Modelling.csv') data.head(5) len(data) data.shape ###Output _____no_output_____ ###Markdown Scrubbing the data ###Code data.isnull().values.any() #It seems we have some missing values now let us explore what are the columns #having missing values data.isnull().any() ## it seems that we have missing values in Gender,age and EstimatedSalary data[["EstimatedSalary","Age"]].describe() data.describe() #### It seems that HasCrCard has value as 0 and 1 hence needs to be changed to category data['HasCrCard'].value_counts() ## No of missing Values present data.isnull().sum() ## Percentage of missing Values present round(data.isnull().sum()/len(data)*100,2) ## Checking the datatype of the missing columns data[["Gender","Age","EstimatedSalary"]].dtypes ###Output _____no_output_____ ###Markdown There are three ways to impute missing values: 1. Droping the missing values rows 2. Fill missing values with a test stastics 3. Predict the missing values using ML algorithm ###Code ### Filling the missing value with the mean of the values mean_value=data['EstimatedSalary'].mean() data['EstimatedSalary']=data['EstimatedSalary'].fillna(mean_value) data['Gender'].value_counts() ### Since it seems that the Gender is a categorical field therefore ### we will fill the values with the 0 since its the most occuring number data['Gender']=data['Gender'].fillna(data['Gender'].value_counts().idxmax()) mode_value=data['Age'].mode() data['Age']=data['Age'].fillna(mode_value[0]) ##checking for any missing values data.isnull().any() ###Output _____no_output_____ ###Markdown Renaming the columns ###Code # We would want to rename some of the columns data = data.rename(columns={ 'CredRate': 'CreditScore', 'ActMem' : 'IsActiveMember', 'Prod Number': 'NumOfProducts', 'Exited':'Churn' }) data.columns ###Output _____no_output_____ ###Markdown We would also like to move the churn columnn to the extreme right and drop the customer ID ###Code data.drop(labels=['CustomerId'], axis=1,inplace = True) column_churn = data['Churn'] data.drop(labels=['Churn'], axis=1,inplace = True) data.insert(len(data.columns), 'Churn', column_churn.values) data.columns ###Output _____no_output_____ ###Markdown Changing the data type ###Code # Convert these variables into categorical variables data["Geography"] = data["Geography"].astype('category') data["Gender"] = data["Gender"].astype('category') data.dtypes ###Output _____no_output_____ ###Markdown Exploring the data Statistical Overview ###Code data['Churn'].value_counts(0) data['Churn'].value_counts(1)*100 data.describe() summary_churn = data.groupby('Churn') summary_churn.mean() summary_churn.median() corr = data.corr() plt.figure(figsize=(15,8)) sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values,annot=True) corr ###Output _____no_output_____ ###Markdown Visualization ###Code f, axes = plt.subplots(ncols=3, figsize=(15, 6)) sns.distplot(data.EstimatedSalary, kde=True, color="darkgreen", ax=axes[0]).set_title('EstimatedSalary') axes[0].set_ylabel('No of Customers') sns.distplot(data.Age, kde=True, color="darkblue", ax=axes[1]).set_title('Age') axes[1].set_ylabel('No of Customers') sns.distplot(data.Balance, kde=True, color="maroon", ax=axes[2]).set_title('Balance') axes[2].set_ylabel('No of Customers') plt.figure(figsize=(15,4)) p=sns.countplot(y="Gender", hue='Churn', data=data,palette="Set2") legend = p.get_legend() legend_txt = legend.texts legend_txt[0].set_text("No Churn") 
legend_txt[1].set_text("Churn") p.set_title('Customer Churn Distribution by Gender') plt.figure(figsize=(15,4)) p=sns.countplot(x='Geography', hue='Churn',data=data, palette="Set2") legend = p.get_legend() legend_txt = legend.texts legend_txt[0].set_text("No Churn") legend_txt[1].set_text("Churn") p.set_title('Customer Geography Distribution') plt.figure(figsize=(15,4)) p=sns.countplot(x='NumOfProducts', hue='Churn',data=data, palette="Set2") legend = p.get_legend() legend_txt = legend.texts legend_txt[0].set_text("No Churn") legend_txt[1].set_text("Churn") p.set_title('Customer Distribution by Product') plt.figure(figsize=(15,4)) ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Age'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn') ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Age'] , color=sns.color_palette("Set2")[1],shade=True, label='churn') ax.set(xlabel='Customer Age', ylabel='Frequency') plt.title('Customer Age - churn vs no churn') plt.figure(figsize=(15,4)) ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'Balance'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn') ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'Balance'] , color=sns.color_palette("Set2")[1],shade=True, label='churn') ax.set(xlabel='Customer Balance', ylabel='Frequency') plt.title('Customer Balance - churn vs no churn') plt.figure(figsize=(15,4)) ax=sns.kdeplot(data.loc[(data['Churn'] == 0),'CreditScore'] , color=sns.color_palette("Set2")[0],shade=True,label='no churn') ax=sns.kdeplot(data.loc[(data['Churn'] == 1),'CreditScore'] , color=sns.color_palette("Set2")[1],shade=True, label='churn') ax.set(xlabel='CreditScore', ylabel='Frequency') plt.title('Customer CreditScore - churn vs no churn') plt.figure(figsize=(16,4)) p=sns.barplot(x='NumOfProducts',y='Balance',hue='Churn',data=data, palette="Set2") p.legend(loc='upper right') legend = p.get_legend() legend_txt = legend.texts legend_txt[0].set_text("No Churn") legend_txt[1].set_text("Churn") p.set_title('No of Product VS Balance') ###Output _____no_output_____ ###Markdown Feature selection ###Code from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split data.dtypes ### Encoding the categorical variables data["Geography"] = data["Geography"].astype('category').cat.codes data["Gender"] = data["Gender"].astype('category').cat.codes target = 'Churn' X = data.drop('Churn', axis=1) y=data[target] X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.15, random_state=123, stratify=y) forest=RandomForestClassifier(n_estimators=500,random_state=1) forest.fit(X_train,y_train) importances=forest.feature_importances_ features = data.drop(['Churn'],axis=1).columns indices = np.argsort(importances)[::-1] plt.figure(figsize=(15,4)) plt.title("Feature importances using Random Forest") plt.bar(range(X_train.shape[1]), importances[indices], color="r", align="center") plt.xticks(range(X_train.shape[1]), features[indices], rotation='vertical',fontsize=15) plt.xlim([-1, X_train.shape[1]]) plt.show() ###Output _____no_output_____ ###Markdown Model Fitting ###Code ### From the feature selection let us take only the top 6 features import statsmodels.api as sm top5_features = ['Age','EstimatedSalary','CreditScore','Balance','NumOfProducts'] logReg = sm.Logit(y_train, X_train[top5_features]) logistic_regression = logReg.fit() logistic_regression.summary logistic_regression.params # Create function to compute coefficients coef = logistic_regression.params def y 
(coef,Age,EstimatedSalary,CreditScore,Balance,NumOfProducts) : return coef[0]*Age+ coef[1]*EstimatedSalary+coef[2]*CreditScore+coef[1]*Balance+coef[2]*NumOfProducts import numpy as np #A customer having below attributes #Age: 50 #EstimatedSalary: 100,000 #CreditScore: 600 #Balance: 100,000 #NumOfProducts: 2 #would have 38% chance of churn y1 = y(coef, 50, 100000, 600,100000,2) p = np.exp(y1) / (1+np.exp(y1)) p ###Output _____no_output_____ ###Markdown Fitting Logistic Regression using Scikit Learn ###Code from sklearn.linear_model import LogisticRegression clf = LogisticRegression(random_state=0, solver='lbfgs').fit(X_train[top5_features], y_train) clf.predict(X_test[top5_features]) clf.predict_proba(X_test[top5_features]) clf.score(X_test[top5_features], y_test) ###Output _____no_output_____ ###Markdown Exercise 05-Lesson 08 Performing standardization ###Code from sklearn import preprocessing X_train[top5_features].head() scaler = preprocessing.StandardScaler().fit(X_train[top5_features]) scaler.mean_ scaler.scale_ X_train_scalar=scaler.transform(X_train[top5_features]) X_train_scalar X_test_scalar=scaler.transform(X_test[top5_features]) ###Output C:\Users\Debasish\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler. """Entry point for launching an IPython kernel. ###Markdown Exercise 06-Lesson 08 Performing Scaling ###Code min_max = preprocessing.MinMaxScaler().fit(X_train[top5_features]) min_max.min_ min_max.scale_ X_train_min_max=min_max.transform(X_train[top5_features]) X_test_min_max=min_max.transform(X_test[top5_features]) ###Output _____no_output_____ ###Markdown Exercise 07-Lesson 08 Normalization ###Code normalize = preprocessing.Normalizer().fit(X_train[top5_features]) normalize X_train_normalize=normalize.transform(X_train[top5_features]) X_test_normalize=normalize.transform(X_test[top5_features]) ###Output _____no_output_____ ###Markdown Exercise 08-Lesson 08 Model Evaluation ###Code from sklearn.model_selection import StratifiedKFold skf = StratifiedKFold(n_splits=10,random_state=1).split(X_train[top5_features].values,y_train.values) results=[] for i, (train,test) in enumerate(skf): clf.fit(X_train[top5_features].values[train],y_train.values[train]) fit_result=clf.score(X_train[top5_features].values[test],y_train.values[test]) results.append(fit_result) print('k-fold: %2d, Class Ratio: %s, Accuracy: %.4f' % (i,np.bincount(y_train.values[train]),fit_result)) print('accuracy for CV is:%.3f' % np.mean(results)) ###Output accuracy for CV is:0.790 ###Markdown Using Scikit Learn cross_val_score ###Code from sklearn.model_selection import cross_val_score results_cross_val_score=cross_val_score(estimator=clf,X=X_train[top5_features].values,y=y_train.values,cv=10,n_jobs=1) results_cross_val_score print('accuracy for CV is:%.3f' % np.mean(results_cross_val_score)) ###Output accuracy for CV is:0.790 ###Markdown Exercise 09-Lesson 08 Fine Tuning of Model Using Grid Search ###Code from sklearn import svm from sklearn.model_selection import GridSearchCV from sklearn.model_selection import StratifiedKFold parameters = [ {'kernel': ['linear'], 'C':[0.1, 1, 10]}, {'kernel': ['rbf'], 'gamma':[0.5, 1, 2], 'C':[0.1, 1, 10]}] clf = GridSearchCV(svm.SVC(), parameters, cv = StratifiedKFold(n_splits = 10)) clf.fit(X_train[top5_features], y_train) clf.fit(X_train[top5_features], y_train) print('best score train:', clf.best_score_) print('best parameters train: ', clf.best_params_) ###Output 
best score train: 0.7963529411764706 best parameters train: {'C': 0.1, 'gamma': 0.5, 'kernel': 'rbf'} ###Markdown Exercise 10-Lesson 08 Performance Metrics ###Code from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report,confusion_matrix,accuracy_score from sklearn import metrics clf_random = RandomForestClassifier(n_estimators=20, max_depth=None, min_samples_split=7, random_state=0) clf_random.fit(X_train[top5_features],y_train) y_pred=clf_random.predict(X_test[top5_features]) target_names = ['No Churn', 'Churn'] print(classification_report(y_test, y_pred, target_names=target_names)) cm = confusion_matrix(y_test, y_pred) cm_df = pd.DataFrame(cm, index = ['No Churn','Churn'], columns = ['No Churn','Churn']) plt.figure(figsize=(8,6)) sns.heatmap(cm_df, annot=True,fmt='g',cmap='Blues') plt.title('Random Forest \nAccuracy:{0:.3f}'.format(accuracy_score(y_test, y_pred))) plt.ylabel('True Values') plt.xlabel('Predicted Values') plt.show() ###Output _____no_output_____ ###Markdown Exercise 11-Lesson 08 ROC Curve ###Code from sklearn.metrics import roc_curve,auc fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label=1) roc_auc = metrics.auc(fpr, tpr) plt.figure() plt.title('Receiver Operating Characteristic') plt.plot(fpr, tpr, label='%s AUC = %0.2f' % ('Random Forest', roc_auc)) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.ylabel('Sensitivity(True Positive Rate)') plt.xlabel('1-Specificity(False Positive Rate)') plt.title('Receiver Operating Characteristic') plt.legend(loc="lower right") plt.show() ###Output _____no_output_____
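###Markdown The ROC curve above is built from the hard class labels returned by `predict`, which gives only a few distinct points on the curve. As a refinement, the class probabilities from `predict_proba` can be used instead. The cell below is only a sketch: it reuses `clf_random`, `X_test`, `top5_features` and `y_test` from the cells above, and the exact AUC value will depend on the fitted model. ###Code
# A sketch: ROC/AUC from predicted probabilities instead of hard class labels
y_score = clf_random.predict_proba(X_test[top5_features])[:, 1]  # probability of the churn class (label 1)
fpr_p, tpr_p, _ = roc_curve(y_test, y_score, pos_label=1)
print('AUC using probabilities: %.3f' % auc(fpr_p, tpr_p))
###Output _____no_output_____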
R_Notebooks/automl-for-wage-prediction.ipynb
###Markdown This notebook contains an example for teaching. Automatic Machine Learning with H2O AutoML using Wage Data from 2015 We illustrate how to predict an outcome variable Y in a high-dimensional setting, using the AutoML package *H2O* that covers the complete pipeline from the raw dataset to the deployable machine learning model. In last few years, AutoML or automated machine learning has become widely popular among data science community. We can use AutoML as a benchmark and compare it to the methods that we used in the previous notebook where we applied one machine learning method after the other. ###Code # load the H2O package library(h2o) # start h2o cluster h2o.init() # load the data set load("wage2015_subsample_inference.Rdata") # split the data set.seed(1234) training <- sample(nrow(data), nrow(data)*(3/4), replace=FALSE) train <- data[training,] test <- data[-training,] # start h2o cluster h2o.init() # convert data as h2o type train_h = as.h2o(train) test_h = as.h2o(test) # have a look at the data h2o.describe(train_h) y = 'lwage' x = setdiff(names(data), c('wage','occ2', 'ind2')) x # run AutoML for 10 base models and a maximal runtime of 100 seconds aml = h2o.automl(x=x,y = y, training_frame = train_h, leaderboard_frame = test_h, max_models = 10, seed = 1, max_runtime_secs = 100 ) # AutoML Leaderboard lb = aml@leaderboard print(lb, n = nrow(lb)) ###Output Warning message in .verify_dataxy(training_frame, x, y): "removing response variable from the explanatory variables" ###Markdown We see that two Stacked Ensembles are at the top of the leaderboard. Stacked Ensembles often outperform a single model. The out-of-sample (test) MSE of the leading model is given by ###Code aml@leaderboard$mse[1] ###Output _____no_output_____ ###Markdown The in-sample performance can be evaluated by ###Code aml@leader ###Output _____no_output_____ ###Markdown This is in line with our previous results. To understand how the ensemble works, let's take a peek inside the Stacked Ensemble "All Models" model. The "All Models" ensemble is an ensemble of all of the individual models in the AutoML run. This is often the top performing model on the leaderboard. ###Code model_ids <- as.data.frame(aml@leaderboard$model_id)[,1] model_ids grep("StackedEnsemble_AllModels", model_ids, value = TRUE)[1] # Get the "All Models" Stacked Ensemble model se <- h2o.getModel(grep("StackedEnsemble_AllModels", model_ids, value = TRUE)[1]) se # Get the Stacked Ensemble metalearner model metalearner <- se@model$metalearner_model metalearner h2o.varimp(metalearner) ###Output _____no_output_____ ###Markdown The table above gives us the variable importance of the metalearner in the ensemble. The AutoML Stacked Ensembles use the default metalearner algorithm (GLM with non-negative weights), so the variable importance of the metalearner is actually the standardized coefficient magnitudes of the GLM. ###Code h2o.varimp_plot(metalearner) ###Output _____no_output_____ ###Markdown Generating Predictions Using Leader ModelWe can also generate predictions on a test sample using the leader model object. ###Code pred <- as.matrix(h2o.predict(aml@leader,test_h)) # make prediction using x data from the test sample head(pred) ###Output |======================================================================| 100% ###Markdown This allows us to estimate the out-of-sample (test) MSE and the standard error as well. 
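###Markdown Regressing the squared prediction errors on a constant is a convenient way to get both quantities at once: the intercept of that regression is the sample mean of the squared errors, $$\widehat{MSE} = \frac{1}{n_{test}}\sum_{i=1}^{n_{test}} (y_i - \hat{y}_i)^2,$$ and its reported standard error is the usual standard error of a sample mean, $$\widehat{SE} = \frac{sd\big((y_i - \hat{y}_i)^2\big)}{\sqrt{n_{test}}}.$$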
###Code y_test <- as.matrix(test_h$lwage) summary(lm((y_test-pred)^2~1))$coef[1:2] ###Output _____no_output_____ ###Markdown We observe both a lower MSE and a lower standard error compared to our previous results (see [here](https://www.kaggle.com/janniskueck/pm3-notebook-newdata)). ###Code h2o.shutdown(prompt = F) ###Output _____no_output_____
scriptdata/finalpscript.ipynb
###Markdown creating NVD Database data, scanning data dataframe ###Code nvddata=pd.read_csv('/home/Bushu/Documents/Enviroment/allen/data/nvdwithmissng.csv',header="infer") #nvddata=nvddata[nvddata.cvss_scorev3 != 'nov3'] nvddata=nvddata.iloc[:,[0,1,3,6,7,]] targetdata=pd.read_csv('/home/Bushu/Documents/Enviroment/allen/data/mainscan/targetresult.csv') threatdata=pd.read_csv('/home/Bushu/Documents/Enviroment/allen/data/mainscan/threatresult.csv') cwe_patterns=pd.read_csv('/home/Bushu/Documents/Enviroment/allen/data/cwelist.csv') # data=pd.read_csv('/home/Bushu/Documents/Final Paper/data/nvd2021e.csv',header="infer") #https://www.cvedetails.com/vulnerability-list/year-2007/month-1/January.html len(nvddata[nvddata.cvss_scorev3 != "nov3"]) ###Output _____no_output_____ ###Markdown Data cleaning lines ###Code targetdata.fillna(0, inplace=True) # nvddata.cvss_scorev3.replace(to_replace=["CVSS:3.1","CVSS:3.0"],value="nov3",inplace=True) cwe_patterns=cwe_patterns.iloc[:,[0,6,21]] #open sour e vulnerablity database data nvddata.head(10) # CWE CAPECA data cwe_patterns.head(10) # reconnassiance synthetic data threatdata.head(10) targetdata.head(10) ###Output _____no_output_____ ###Markdown Probablity calculation formula from ("Vulnerblity Connectivity") by Heng Wei Zhang ###Code def Vulenerablity_connector(vec): if vec != "nov3": v3=vec.split('/') v3_dic={ x.split(':')[0]:x.split(':')[1] for x in v3 } # this values are taken form cvss 3.1 metric score table # Attack Vector = N=0.85 A=0.62 L=0.55 P=0.22 # Attack Complexity = H=0.77 L=0.44 # Priviledges Required N=0.85 scope changed(L=0.68,H=0.5) or scope not changed(L=0.62,H=0.27) # scope C is changed and U is not changed # math. sqrt(x) av_value={"N":0.85, "A":0.62 ,"L":0.55 ,"P":0.22} ac_value={"H":0.77, "L":0.44 } pr_value={ "C":{"L":0.68,"H":0.5, "N":0.85,}, "U":{"L":0.62,"H":0.27, "N":0.85,}} probablity=(pr_value[v3_dic["S"]][v3_dic["PR"]]/(pr_value[v3_dic["S"]][v3_dic["PR"]]+math.sqrt(av_value[v3_dic["AV"]]*ac_value[v3_dic["AC"]]))) return probablity return "unknown" nvddata["Vulnerablity_connector"]=nvddata.cvss_scorev3.map(Vulenerablity_connector) ###Output _____no_output_____ ###Markdown Scanning result filter and miscellaneous functions ###Code # this is target version,os and device list #////////////////////////////////////////////////////// ver=targetdata.version.unique() os=targetdata.OSInfo.unique() os=[x for x in os if x != 0] dev=targetdata.Device.unique() dev=[x for x in dev if x != 0] #///////////////////////////////////////////////////////// # this is threat version,os and device list # ///////////////////////////////////////////////////// th_ver=threatdata.version.unique() th_os=threatdata.OSInfo.unique() th_os=[x for x in th_os if x != 0] th_dev=threatdata.Device.unique() th_dev=[x for x in th_dev if x != 0] # ///////////////////////////////////////////////////////////////// # This is target session filter # /////////////////////////////////////// def test_target_service(description): if re.search(f'({"|".join(os)})',description): return True return False def test_target_device(description): if re.search(f'({"|".join(dev)})',description): return True return False def test_target_version(description): if re.search(f'({"|".join(ver)})',description): return True return False # //////////////////////////////////////// # This is threat source version filter # ///////////////////////////////////////////// def test_threat_service(description): if re.search(f'({"|".join(th_os)})',description): return True return False def 
test_threat_device(description): if re.search(f'({"|".join(th_dev)})',description): return True return False def test_threat_version(description): if re.search(f'({"|".join(th_ver)})',description): return True return False # /////////////////////////////////////////////////////// #scoring func### Availablity Severity distributioniton # ///////////////////////////////////////////////////// def cvs3_sc(vector): score=CVSS3(vector) return score.scores()[0] #distribution function #this catagoraizaitn is based on CVSS3 specification table # ////////////////////////////////////////////////////// def compare(score): if score == 0: return "None" elif 0.1 <= score <= 3.9: return "Low" elif 4<= score <=6.9: return "Medium" elif 7<=score<=8.9: return "High" elif 9<=score<=10: return "Critical" # /////////////////////////////////////////////////// ###Output _____no_output_____ ###Markdown Filtering Possible CVEs in target and threat source ###Code # Running Filter Map functions for target # /////////////////////////////// # please note that order maters here when executing nvddata['target_possible_v']=nvddata.description.map(test_target_version) nvddata['target_possible_s']=nvddata.description.map(test_target_service) nvddata['target_possible_dev']=nvddata.description.map(test_target_device) # Running Filter Map functions for threat source # /////////////////////////////// # please note that order maters here when executing nvddata['threat_possible_v']=nvddata.description.map(test_threat_version) nvddata['threat_possible_s']=nvddata.description.map(test_threat_service) #nvddata['threat_possible_dev']=nvddata.description.map(test_threat_device) target_cves=nvddata[(nvddata.target_possible_v == True) | (nvddata.target_possible_s == True) | (nvddata.target_possible_dev == True) ] threat_cves=nvddata[(nvddata.threat_possible_v == True) | (nvddata.threat_possible_s == True)] ###Output _____no_output_____ ###Markdown Generating The List ###Code target_list_with_nov3=target_cves[target_cves.cvss_scorev3 == "nov3"] target_list_with_score=target_cves[target_cves.cvss_scorev3 != "nov3"] threat_list_with_nov3=threat_cves[threat_cves.cvss_scorev3 == "nov3"] threat_list_with_score=threat_cves[threat_cves.cvss_scorev3 != "nov3"] len(target_cves) len(threat_cves) ###Output _____no_output_____ ###Markdown Combination of the lists ###Code #genarated_combination genarated_combination=list(product(threat_list_with_score.cve_number,target_list_with_score.cve_number)) chained_list=pd.DataFrame({'threat_target':genarated_combination},columns=['threat_target']) print(f"length of target cve is : {len(target_list_with_score.cve_number)}") print(f"length of threat cve is : {len(threat_list_with_score.cve_number)}") # dictionary of connector,score and cwe id nvd_score_dic=dict(zip(nvddata.cve_number,nvddata.cvss_scorev3)) connector_dic=dict(zip(nvddata['cve_number'],nvddata['Vulnerablity_connector'])) cweid_dic=dict(zip(nvddata['cve_number'],nvddata['cwe_number'])) #calculating combined vulnerablity connectivity probablity def combined_connector(v3): return connector_dic[v3[0]]*connector_dic[v3[1]] #connectivity operator as per zhang def zhang_connectivity_op(th_targ): v3t=nvd_score_dic.get(th_targ[0]) v3targ=nvd_score_dic.get(th_targ[1]) v3t_dic={ x.split(':')[0]:x.split(':')[1] for x in v3t.split('/') } v3targ_dic={ x.split(':')[0]:x.split(':')[1] for x in v3targ.split('/') } av_value={"N":0.85, "A":0.62 ,"L":0.55 ,"P":0.22} ac_value={"H":0.77, "L":0.44 } pr_value={ "C":{"L":0.68,"H":0.5, "N":0.85,}, "U":{"L":0.62,"H":0.27, 
"N":0.85,}} probablity=pr_value[v3t_dic["S"]][v3t_dic["PR"]]/(pr_value[v3t_dic["S"]][v3t_dic["PR"]]+math.sqrt(ac_value[v3t_dic["AC"]]*av_value[v3targ_dic["AV"]])) return probablity #probablity=(pr_value[v3_dic["S"]][v3_dic["PR"]]/(pr_value[v3_dic["S"]][v3_dic["PR"]]+math.sqrt(av_value[v3_dic["AV"]]*ac_value[v3_dic["AC"]]))) #calculating chainned score def combined_chainned_score(v3): if nvd_score_dic.get(v3[0]): threat_lis=nvd_score_dic.get(v3[0]).split('/') if nvd_score_dic.get(v3[1]): target_lis=nvd_score_dic.get(v3[1]).split('/') if nvd_score_dic.get(v3[1]) and nvd_score_dic.get(v3[0]): threat={x.split(':')[0]:x.split(':')[1] for x in threat_lis} target={x.split(':')[0]:x.split(':')[1] for x in target_lis} # this values are taken form cvss 3.1 metric score table # Attack Vector = N=0.85 A=0.62 L=0.55 P=0.22 # confidentiality/Integerity/Availablity H=0.56,L=0.22,N=0 # User Interaction N=0.85 R=0.62 # Attack Complexity = H=0.77 L=0.44 # Priviledges Required N=0.85 scope changed(L=0.68,H=0.5) or scope not changed(L=0.62,H=0.27) #print(threat) scope=threat['S'] if threat['S'] != target['S']: scope="C" ui=threat['UI'] if threat['UI'] != target['UI']: ui="N" ac=threat['AC'] if threat['AC'] != target['AC']: ac="L" if threat['C'] == "H" or target['C'] == "H": conf="H" elif threat['C'] == "L" or target['C'] == "L" and (threat['C'] != "H" or target['C'] != "H"): conf="L" else: conf="N" if threat['I'] == "H" or target['I'] == "H": integ="H" elif threat['I'] == "L" or target['I'] == "L" and (threat['I'] != "H" or target['I'] != "H"): integ="L" else: integ="N" if threat['A'] == "H" or target['A'] == "H": avail="H" elif threat['A'] == "L" or target['A'] == "L" and (threat['A'] != "H" or target['A'] != "H"): avail="L" else: avail="N" if threat['PR'] == "N" or target['PR'] == "N": pr="N" elif threat['PR'] == "L" or target['PR'] == "L" and (threat['PR'] != "N" or target['PR'] != "N"): pr="L" else: pr="H" #NALP if threat['AV'] == "N" or target['AV'] == "N": av="N" elif threat['AV'] == "A" or target['AV'] == "A" and (threat['AV'] != "N" or target['AV'] != "N"): av="A" elif threat['AV'] == "L" or target['AV'] == "L" and (threat['AV'] != "A" or target['AV'] != "A" or threat['AV'] != "N" or target['AV'] != "N"): av="L" else: av="P" chained_vector= f"CVSS:3.1/AV:{av}/AC:{ac}/PR:{pr}/UI:{ui}/S:{scope}/C:{conf}/I:{integ}/A:{avail}" # return CVSS3(chained_vector).scores()[0] #return chained_vector else: return "unkown" #checking related weakness from the CWE related pattern list cwe_patterns.Related_Attack_Patterns.fillna("nothing",inplace=True) def cwe_related_patterns(h): if h != "nothing": return h.strip().split("::") cwe_patterns['related_list']=cwe_patterns.Related_Attack_Patterns.map(cwe_related_patterns) pattern_litmus_dic=dict(zip(cwe_patterns.CWE_ID,cwe_patterns.related_list)) def checking_related_pattern(vec): if cweid_dic.get(vec[0]) or cweid_dic.get(vec[1]): if cweid_dic.get(vec[0]) in cweid_dic.get(vec[1]) or cweid_dic.get(vec[1]) in cweid_dic.get(vec[0]): return "probable relation" return "no information" return "no information" chained_list['vulnerablity_connector']=chained_list.threat_target.map(combined_connector) chained_list['cvssv3_chained_score']=chained_list.threat_target.map(combined_chainned_score) chained_list['relation_for_chainning']=chained_list.threat_target.map(checking_related_pattern) # chained_list["zhang_connector"]=chained_list.threat_target.map(combined_connector) # chained_list["zhang_connector"]=chained_list.threat_target.map(zhang_connectivity_op) ###Output 
_____no_output_____ ###Markdown Finally Generated csv file List ###Code def target_source_av(v3): if nvd_score_dic.get(v3[0]): threat_lis=nvd_score_dic.get(v3[0]).split('/') if nvd_score_dic.get(v3[1]): target_lis=nvd_score_dic.get(v3[1]).split('/') if nvd_score_dic.get(v3[1]) and nvd_score_dic.get(v3[0]): threat={x.split(':')[0]:x.split(':')[1] for x in threat_lis} target={x.split(':')[0]:x.split(':')[1] for x in target_lis} return f"{threat['AV']}-{target['AV']}" def filter_source_target_logic(x): if x.split('-')[0] == "P" or x.split('-')[1] == "P" or x =="A-A": return False; return True chained_list['av_source_target']=chained_list.threat_target.map(target_source_av) chained_list['chain_logic']=chained_list.av_source_target.map(filter_source_target_logic) # filtering with chainlogic values chained_list=chained_list[chained_list.chain_logic == True] chained_list.drop('chain_logic',1) print('nothing') #simple spliting funcitons def source_split(x): return x[0] def target_split(x): return x[1] def compare(score): if score == 0: return "None" elif 0.1 <= score <= 3.9: return "Low" elif 4<= score <=6.9: return "Medium" elif 7<=score<=8.9: return "High" elif 9<=score<=10: return "Critical" chained_list['severity']=chained_list.cvssv3_chained_score.map(compare) chained_list['source_cve']=chained_list.threat_target.map(source_split) chained_list['target_cve']=chained_list.threat_target.map(target_split) ###Output _____no_output_____ ###Markdown Sections following are sample ways of using generated list ###Code chained_list.vulnerablity_connector.describe() chained_list.cvssv3_chained_score.describe() chained_list.head(5) target_weakness=chained_list.groupby(['target_cve'])['severity'].value_counts().reset_index(name='severity_counts') target_weakness.head(10) # target_weakness_normalized=chained_list.groupby(['target_cve'])['severity'].value_counts(normalize=True).reset_index(name='severity_weights') # target_weakness_normalized.head(10) twn=chained_list.groupby(['target_cve'])['severity'].value_counts(normalize=True) twn cwd=dict(zip(twn.index,twn.values)) # target_weakness_connector=chained_list.groupby(['target_cve','severity'])['vulnerablity_connector'].mean().sort_values(ascending=False) twc=chained_list.groupby(['target_cve','severity'])['vulnerablity_connector'].mean() vcd=dict(zip(twc.index,twc.values)) tv3s=chained_list.groupby(['target_cve','severity'])['cvssv3_chained_score'].mean() # tv3s=chained_list.groupby(['target_cve'])['cvssv3_chained_score'].mean() tv3s # chained_list.groupby smd=dict(zip(tv3s.index,tv3s.values)) ptc=chained_list.groupby('target_cve')['relation_for_chainning'].value_counts() ptcd=dict(zip(ptc.index,ptc.values)) final_product=pd.DataFrame({'target_cve':chained_list.target_cve.unique()}) def weighted_average_connectivity(x): s=["High","Medium","Low","Critical"] return ((twc.get((x,s[0]),0)*twn.get((x,s[0]),0))+(twc.get((x,s[1]),0)*twn.get((x,s[1]),0))+(twc.get((x,s[2]),0)*twn.get((x,s[2]),0))+(twc.get((x,s[3]),0)*twn.get((x,s[3]),0)))/(twn.get((x,s[0]),0)+twn.get((x,s[1]),0)+twn.get((x,s[2]),0)+twn.get((x,s[3]),0)) def average_v3_score(x): s=["High","Medium","Low","Critical"] return ((smd.get((x,s[0]),0)*twn.get((x,s[0]),0))+(smd.get((x,s[1]),0)*twn.get((x,s[1]),0))+(smd.get((x,s[2]),0)*twn.get((x,s[2]),0))+(smd.get((x,s[3]),0)*twn.get((x,s[3]),0)))/(twn.get((x,s[0]),0)+twn.get((x,s[1]),0)+twn.get((x,s[2]),0)+twn.get((x,s[3]),0)) def pattern_count(x): return ptcd.get((x,"probable relation"),0) 
final_product['average_connectivity']=final_product.target_cve.map(weighted_average_connectivity) final_product['average_cvss3_score']=final_product.target_cve.map(average_v3_score) final_product['relation_count']=final_product.target_cve.map(pattern_count) plt.title("cwe realtion count distribution in target") sns.violinplot(final_product.relation_count) plt.savefig('relationcount.png') plt.title("vulnerablitiy connectivity distribution in target") sns.violinplot(final_product.average_connectivity) plt.savefig('acon_frequencey.png') plt.title("CVSS v3 chainned score distribution in target") sns.violinplot(final_product.average_cvss3_score) plt.savefig('av3socre_frequencey.png') #maximum vulnerablity connectivity can assume def max_connectivity(x): return x/(x+math.sqrt(0.2*0.44)) gt=np.arange(0.27, 0.85, 0.01) plt.plot(gt, max_connectivity(gt),'r--') #minimum vulnerablity connectitivity can assume def min_connectivity(x): return x/(x+math.sqrt(0.77*0.85)) gt=np.arange(0.27, 0.85, 0.01) plt.plot(gt, min_connectivity(gt),'r--') # Max connector value print(0.85/(0.85+math.sqrt(0.2*0.44))) # max connectivity value #min cinnector value print(0.27/(0.27+math.sqrt(0.7*0.85))) # min connectivity value ###Output 0.25927572567958324 ###Markdown Below is exporting chained list with cvss scores and vulnerablity_connectivity scores ###Code # data with scores and that do not have "Physical" Attack vector value in the target node final_product.head(10) # data that have "Physical" Attack vector value in the target node def filter_physical(x): sam=x.split("/") test_dic={x.split(':')[0]:x.split(':')[1] for x in sam} if test_dic.get('AV',None)=="P": return True return False targ_pd=pd.DataFrame({'cve_number':target_list_with_score.cve_number,'cvss_vector':target_list_with_score.cvss_scorev3},columns=['cve_number','cvss_vector']) targ_pd["physical"]=targ_pd.cvss_vector.map(filter_physical) # targ_pd['base_score']=targ_pd.cvss_vector.map(cvs3_sc) targ_pd[targ_pd.physical != False] # print(CVSS3('CVSS:3.1/AV:P/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H').scores()[0]) threat_list_with_score[threat_list_with_score.cve_number == "CVE-2005-0197"] # weakness in target node with no CVSS scores target_list_with_nov3.iloc[:,[0,3]].head(10) ###Output _____no_output_____ ###Markdown Some Numeric Evaluation of Results ###Code print(f"length of target cve is : {len(target_cves.cve_number)}") print(f"length of threat cve is : {len(threat_cves.cve_number)}") print(f"length of target cve with cvss scores is : {len(target_list_with_score.cve_number)}") print(f"length of threat cve with cvss scores is : {len(threat_list_with_score.cve_number)}") len(genarated_combination) # genarated_combination=list(product(threat_list_with_score.cve_number,target_list_with_score.cve_number)) # three_stage_threat_path=(list(product(threat_list_with_score.cve_number,threat_list_with_score.cve_number,target_list_with_score.cve_number))) three_stage_threat_path=(list(product(threat_list_with_score.cve_number,threat_list_with_score.cve_number,target_list_with_score.cve_number))) print(f" The single stage paths that can be formed are {len(genarated_combination)}") print(f" The three stage paths that can be formed are {len(three_stage_threat_path)}") # if multistage path generation for risk assessment was employed len(final_product[final_product.average_cvss3_score > 4]) # interms of performance of the approach # real explotion should be conducted and compare results ###Output _____no_output_____ ###Markdown Finally Exporting to CSV ###Code # exporting to cvss 
# final_product.to_csv("finalproduct.csv",index=False,encoding="utf-8") ###Output _____no_output_____
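###Markdown As a quick sanity check on the connectivity formula used throughout, the small sketch below evaluates the `Vulenerablity_connector` function defined near the top of this notebook on a single illustrative CVSS v3 vector; the vector string itself is made up for the example and is not taken from the NVD data. ###Code
# A worked example of the connectivity probability PR / (PR + sqrt(AV * AC)):
# AV:N -> 0.85, AC:L -> 0.44, PR:N with scope U -> 0.85,
# so 0.85 / (0.85 + sqrt(0.85 * 0.44)) is roughly 0.58
sample_vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"  # hypothetical vector for illustration
print(round(Vulenerablity_connector(sample_vector), 3))
###Output _____no_output_____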
Spacy - CoNLL.ipynb
###Markdown Training an NER with spaCy on the CoNLL dataset Converting data to json structures so it can be used by Spacy ###Code !mkdir spacyNER_data !python3 -m spacy convert "CoNLL - 2003/en/train.txt" spacyNER_data -c ner !python3 -m spacy convert "CoNLL - 2003/en/test.txt" spacyNER_data -c ner !python3 -m spacy convert "CoNLL - 2003/en/valid.txt" spacyNER_data -c ner ###Output  Generated output file spacyNER_data/train.txt.json Created 1 documents  Generated output file spacyNER_data/test.txt.json Created 1 documents  Generated output file spacyNER_data/valid.txt.json Created 1 documents ###Markdown For example : ###Code !echo "BEFORE : (CoNLL - 2003/en/train.txt)" !head "CoNLL - 2003/en/train.txt" -n 11 | tail -n 9 !echo "\nAFTER : (spacyNER_data/train.txt.json)" !head "spacyNER_data/train.txt.json" -n 64 | tail -n 49 ###Output BEFORE : (CoNLL - 2003/en/train.txt) EU NNP B-NP B-ORG rejects VBZ B-VP O German JJ B-NP B-MISC call NN I-NP O to TO B-VP O boycott VB I-VP O British JJ B-NP B-MISC lamb NN I-NP O . . O O AFTER : (spacyNER_data/train.txt.json) { "tokens":[ { "tag":"NNP", "ner":"U-ORG", "orth":"EU" }, { "tag":"VBZ", "ner":"O", "orth":"rejects" }, { "tag":"JJ", "ner":"U-MISC", "orth":"German" }, { "tag":"NN", "ner":"O", "orth":"call" }, { "tag":"TO", "ner":"O", "orth":"to" }, { "tag":"VB", "ner":"O", "orth":"boycott" }, { "tag":"JJ", "ner":"U-MISC", "orth":"British" }, { "tag":"NN", "ner":"O", "orth":"lamb" }, { "tag":".", "ner":"O", "orth":"." } ] }, ###Markdown Training the NER model with Spacy (CLI) ###Code !python3 -m spacy train en model spacyNER_data/train.txt.json spacyNER_data/valid.txt.json -G -T -P ###Output dropout_from = 0.2 by default dropout_to = 0.2 by default dropout_decay = 0.0 by default batch_from = 1 by default batch_to = 16 by default batch_compound = 1.001 by default max_doc_len = 5000 by default beam_width = 1 by default beam_density = 0.0 by default learn_rate = 0.001 by default optimizer_B1 = 0.9 by default optimizer_B2 = 0.999 by default optimizer_eps = 1e-08 by default L2_penalty = 1e-06 by default grad_norm_clip = 1.0 by default parser_hidden_depth = 1 by default parser_maxout_pieces = 2 by default token_vector_width = 128 by default hidden_width = 200 by default embed_size = 7000 by default history_feats = 0 by default history_width = 0 by default Itn. P.Loss N.Loss UAS NER P. NER R. NER F. Tag % Token % 0 0.000 2475.628 0.000 81.932 82.497 82.214 0.000 100.000 19837.3 0.0 1 0.000 24.277 0.000 85.908 86.486 86.196 0.000 100.000 19685.2 0.0 2 0.000 14.419 0.000 87.013 87.159 87.086 0.000 100.000 20357.2 0.0 3 0.000 11.147 0.000 87.070 87.832 87.450 0.000 100.000 20438.6 0.0 65%|█████████████████████▌ | 133621/204567 [01:28<00:46, 1536.90it/s] ###Markdown Evaluating the model with test data set (`spacyNER_data/test.txt.json`) On Trained model (`model/model6`) ###Code # !mkdir result !python3 -m spacy evaluate model/model6 spacyNER_data/test.txt.json -dp result # !python -m spacy evaluate model/model-final data/test.txt.json -dp result ###Output /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192, got 176 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. 
Expected 96, got 88 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192, got 176 return f(*args, **kwds) Results Time 2.21 s NER P 76.84 NER F 77.68 Words 46666 POS 0.00 UAS 0.00 LAS 0.00 TOK 100.00 Words/s 21098 NER R 78.54  Generated 25 parses as HTML result ###Markdown View visualisation of entities detected by (`model/model6`) with displaCy [here](http://vishalgupta.me/IntEnt/result/entities.html) Pretrained model (`en`) ###Code !python3 -m spacy evaluate en spacyNER_data/test.txt.json -dp pretrained_result ###Output /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192, got 176 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds) /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192, got 176 return f(*args, **kwds) Results LAS 0.00 TOK 100.00 NER F 6.53 Time 8.02 s NER P 5.29 Words/s 5821 UAS 0.00 POS 86.99 NER R 8.55 Words 46666  Generated 25 parses as HTML pretrained_result
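###Markdown Beyond the CLI evaluation, the trained model can be loaded back into Python for ad-hoc checks. The sketch below assumes spaCy 2.x (as used above) and that the `model/model6` directory produced by the training run is present; the sample sentence is made up. ###Code
import spacy

# Load the custom NER model trained above and inspect its entities on one sentence
nlp = spacy.load("model/model6")
doc = nlp("The European Commission met Germany's Werner Zwingmann in Brussels on Wednesday.")
print([(ent.text, ent.label_) for ent in doc.ents])
###Output _____no_output_____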
Legacy/.ipynb_checkpoints/Library_Collection_EDA_pipeline(1)-checkpoint.ipynb
###Markdown Machine Learning for Demand Forecasting Use case - predicting the demand for items at a libary ###Code # sfOptions = { # "sfURL" : "datalytyx.east-us-2.azure.snowflakecomputing.com", # "sfAccount" : "datalytyx", # "sfUser" : "WILLHOLTAM", # "sfPassword" : "04MucSfLV", # "sfRole": "DATABRICKS", # "sfDatabase" : "DATABRICKS_DEMO", # "sfSchema" : "SEATTLE_LIBRARY", # "sfWarehouse" : "DATASCIENCE_WH" # } # SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake" # #spark.conf.set("spark.executor.cores",2) # df = spark.read.format(SNOWFLAKE_SOURCE_NAME) \ # .options(**sfOptions) \ # .option("query", """select * from library_collection_inventory where reportdate in ('2017-09-01T00:00:00','2017-10-01T00:00:00', '2017-11-01T00:00:00', '2017-12-01T00:00:00', '2018-01-01T00:00:00', '2018-01-01T00:00:00', '2018-02-01T00:00:00', '2018-02-01T00:00:00', '2018-03-01T00:00:00', '2018-04-01T00:00:00', '2018-05-01T00:00:00', '2018-06-01T00:00:00', '2018-07-01T00:00:00') """) \ # .load().limit(1000) # # Create a view or table # temp_table_name = "library_collection_inventory" # df.createOrReplaceTempView(temp_table_name) # Import Libraries import numpy as np import pandas as pd import nltk # Has to be added through Workspaces/ attach library to cluster import more_itertools import re import os import codecs import mpld3 from snowflake.sqlalchemy import URL from nltk.stem.snowball import SnowballStemmer from sqlalchemy import create_engine from sklearn.base import BaseEstimator, TransformerMixin from sklearn import feature_extraction from sklearn.cluster import KMeans from sklearn.metrics.pairwise import euclidean_distances from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from sklearn import preprocessing from sklearn.pipeline import Pipeline # df_pandas = df.toPandas() # Create pandas dataframe to work within python when in Databricks sf_account = "datalytyx" sf_user = "WILLHOLTAM" sf_pwd = "04MucSfLV" # sf_user = "CHRISSCHON" # sf_pwd = "UpsetSheep7" sf_role = "DATABRICKS" sf_db = "DATABRICKS_DEMO" sf_schema = "SEATTLE_LIBRARY" sf_wh = "DATASCIENCE_WH" sf_region = "east-us-2.azure" engine = create_engine(URL( user = sf_user, password = sf_pwd, account = sf_account, region = sf_region, database = sf_db, schema = sf_schema, warehouse = sf_wh, role = sf_role, )) # engine = create_engine(connection_string) features = pd.read_sql_query("select * from library_collection_inventory where reportdate in ('2017-09-01T00:00:00','2017-10-01T00:00:00', '2017-11-01T00:00:00', '2017-12-01T00:00:00', '2018-01-01T00:00:00', '2018-01-01T00:00:00', '2018-02-01T00:00:00', '2018-02-01T00:00:00', '2018-03-01T00:00:00', '2018-04-01T00:00:00', '2018-05-01T00:00:00', '2018-06-01T00:00:00', '2018-07-01T00:00:00') limit 1000", engine) nltk.download('stopwords') # Common words to ignore nltk.download('punkt') # Punkt Sentence Tokenizer # df_pandas.dropna(axis=1, how='all') # Drop the columns where all of the elements are missing values # df_pandas.dropna(axis=0, how='any') # Drop the rows where any of the elements are missing values # load nltk's English stopwords as variable called 'stopwords' stopwords = nltk.corpus.stopwords.words('english') print(stopwords[:10]) # load nltk's SnowballStemmer as variabled 'stemmer' stemmer = SnowballStemmer("english") class NoneReplacer(TransformerMixin, BaseEstimator): """ Transformer changes Nonetype values into numpy NaN values. 
""" def __init__(self): pass def fit(self, X, y = None): # X_fitted = X.where(X == None) return self def transform(self, X): #["" if item is None else str(item) for item in X.select_dtypes(include='object')] assert isinstance(X, pd.DataFrame) X.fillna(value = pd.np.nan, inplace=True) return X initiated_class = NoneReplacer() initiated_class.fit(features) df_pandas_fitter = initiated_class.transform(features) # df_pandas_fitter class EmptyColumnRemover(TransformerMixin, BaseEstimator): """ Transformer drops empty columns """ def __init__(self): pass def fit(self, X, y = None): # has to take an optional y for pipelines """ Calculates the number of missing values that corresponds to the threshold. Detects and labels columns with equal to or greater than numbers of missing values than the threshold. """ self.drop_columns = features.isna().sum()[features.isna().sum() >= X.shape[0]].index # Calculates pd.series with column lables as indecies return self def transform(self, X): """ Drops columns containing empty values. """ assert isinstance(X, pd.DataFrame) return X.drop(columns = self.drop_columns) initiated_class = EmptyColumnRemover() initiated_class.fit(X=df_pandas_fitter) df_pandas_fitter1 = initiated_class.transform(df_pandas_fitter) # df_pandas_fitter1 class AnyNaNRowRemover(TransformerMixin, BaseEstimator): def __init__(self): pass def fit(self, X, y = None): # has to take an optional y for pipelines """ Calculates the number of missing values the corresponds to the threshold. Detects and labels columns with more missing values that the threshold. """ return self def transform(self, X): assert isinstance(X, pd.DataFrame) return X.dropna(axis=0, how='any') initiated_class = AnyNaNRowRemover() initiated_class.fit(df_pandas_fitter1) df_pandas_fitter2 = initiated_class.transform(df_pandas_fitter1) class TokenizeAndStemer(TransformerMixin, BaseEstimator): def __init__(self, default_column = 'subjects'): self.default_column = default_column pass def tokenize_and_stem(self, X): tokens = [word for sent in nltk.sent_tokenize(X) for word in nltk.word_tokenize(sent)] filtered_tokens = [] [filtered_tokens.append(token) if re.search('[a-zA-Z]', token) else token for token in tokens] stems = [stemmer.stem(t) for t in filtered_tokens] return stems def fit(self, X, y = None): return self def transform(self, X): totalvocab_stemmed = [] allwords_stemmed = [self.tokenize_and_stem(str(i)) for i in X[self.default_column].tolist()] #for each item in 'synopses', tokenize/stem totalvocab_stemmed.extend(allwords_stemmed) totalvocab_stemmed = list(more_itertools.collapse(totalvocab_stemmed)) return totalvocab_stemmed # How to implement in class above???? OR do I have to implement it in a separate class... 
# allwords_stemmed = [tokenize_and_stem(str(i)) for i in subjects] #for each item in 'synopses', tokenize/stem # totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list # totalvocab_stemmed = list(more_itertools.collapse(totalvocab_stemmed)) initiated_class = TokenizeAndStemer() initiated_class.fit(df_pandas_fitter2) df_pandas_fitter3 = initiated_class.transform(df_pandas_fitter2) class TokenizeOnly(TransformerMixin, BaseEstimator): def __init__(self, default_column = 'subjects'): self.default_column = default_column pass def tokenize_only(text): # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] filtered_tokens = [] # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) for token in tokens: if re.search('[a-zA-Z]', token): filtered_tokens.append(token) return filtered_tokens def fit(self, X, y = None): return self def transform(self, X): totalvocab_tokenized = [] allwords_tokenized = [self.tokenize_only(str(i)) for i in X[self.default_column].tolist()] #for each item in 'synopses', tokenize/stem totalvocab_tokenized.extend(allwords_tokenized) totalvocab_tokenized = list(more_itertools.collapse(totalvocab_tokenized)) return totalvocab_tokenized data_pipeline = Pipeline([ ('nr', NoneReplacer()), ('ecr', EmptyColumnRemover()), ('anrr', AnyNaNRowRemover()), ('tas', TokenizeAndStemer()), # ('to', TokenizeOnly()) ]) data_pipeline.fit_transform(X = features) # def tokenize_and_stem(X, default_column = 'subjects'): # tokens = [word for sent in nltk.sent_tokenize(X[default_column]) for word in nltk.word_tokenize(sent)] # filtered_tokens = [] # [filtered_tokens.append(token) if re.search('[a-zA-Z]', token) else token for token in tokens] # stems = [stemmer.stem(t) for t in filtered_tokens] # return stems # tokenize_and_stem(df_pandas_fitter2) # def tokenize_and_stem(text): # # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token # tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] # filtered_tokens = [] # # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) # for token in tokens: # if re.search('[a-zA-Z]', token): # filtered_tokens.append(token) # stems = [stemmer.stem(t) for t in filtered_tokens] # return stems # tokenize_and_stem(df_pandas_fitter2['subjects']) # def tokenize_and_stem(text): # # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token # tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] # filtered_tokens = [] # # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) # for token in tokens: # if re.search('[a-zA-Z]', token): # filtered_tokens.append(token) # stems = [stemmer.stem(t) for t in filtered_tokens] # return stems # allwords_stemmed = [tokenize_and_stem(str(i)) for i in subjects] #for each item in 'synopses', tokenize/stem # totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list # totalvocab_stemmed = list(more_itertools.collapse(totalvocab_stemmed)) # subjects = df_pandas['subjects'] # Define a tokenizer and stemmer which returns the set of stems in the text that it is passed # #class TextTransform() # def tokenize_and_stem(text): # # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own 
token # tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] # filtered_tokens = [] # # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) # for token in tokens: # if re.search('[a-zA-Z]', token): # filtered_tokens.append(token) # stems = [stemmer.stem(t) for t in filtered_tokens] # return stems # def tokenize_only(text): # # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token # tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] # filtered_tokens = [] # # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) # for token in tokens: # if re.search('[a-zA-Z]', token): # filtered_tokens.append(token) # return filtered_tokens # totalvocab_stemmed = [] # totalvocab_tokenized = [] # allwords_stemmed = [tokenize_and_stem(str(i)) for i in subjects] #for each item in 'synopses', tokenize/stem # totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list # totalvocab_stemmed = list(more_itertools.collapse(totalvocab_stemmed)) # allwords_tokenized = [tokenize_only(str(i)) for i in subjects] # totalvocab_tokenized.extend(allwords_tokenized) # totalvocab_tokenized = list(more_itertools.collapse(totalvocab_tokenized)) # totalvocab_stemmed = [] # totalvocab_tokenized = [] # for i in subjects_c: # allwords_stemmed = tokenize_and_stem(str(i)) #for each item in 'synopses', tokenize/stem # totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list # allwords_tokenized = tokenize_only(str(i)) # totalvocab_tokenized.extend(allwords_tokenized) # totalvocab_tokenized # class ModelTransformer(TransformerMixin): # def __init__(self, model): # self.model = model # def fit(self, *args, **kwargs): # self.model.fit(*args, **kwargs) # return self # def transform(self, X, **transform_params): # return pd.DataFrame(self.model.predict(X)) # pipeline = Pipeline([ # ('cluster', ModelTransformer(KMeans_foo(3))), # ('binarize', LabelBinarizer()) # ]) # df_pandas.loc[:,'PUBLICATIONYEAR'] = df_pandas.loc[:,'PUBLICATIONYEAR'].str.extract(r'(^|)*([0-9]{4})\s*(|$)', expand=True) # _______________________________________________________________________________ # Functions for transforming the data # def processing(df): # df['PUBLICATIONYEAR'] = df['PUBLICATIONYEAR'].str.extract(r'(^|)*([0-9]{4})\s*(|$)', expand=True) # [str(item) for item in df['SUBJECTS'] if item is None] # processing(df_pandas) # def ext_date_fun(input, output): # output = input.str.extractstr.extract(r'(^|)*([0-9]{4})\s*(|$)', expand=True) # def fix_none_fun(sub): # [str(item) for item in sub if item is None] # _______________________________________________________________________________ # # Attempt at putting the functions into classes # class PublicationYearCleaner(object): # """Preprocessing: This class cleans the Publication Year Column""" # def __init__(self, data): # self.raw = data # def ext_date(self, pub_year): # pub_year = pub_year.str.extract(r'(^|)*([0-9]{4})\s*(|$)', expand=True) # class SubjectsCleaner(objct) # def fix_none(self, string): # if string is None: # return '' # return str(string) # # ________________________________________________________________________________ # df_pandas.loc[:,'PUBLICATIONYEAR'] = df_pandas.loc[:,'PUBLICATIONYEAR'].str.extract(r'(^|)*([0-9]{4})\s*(|$)', expand=True) # n = 1000 # Number of rows to analyse # titles = df_pandas.loc[0:n,'TITLE'].values.tolist() # subjects = 
df_pandas.loc[0:n,'SUBJECTS'].values.tolist() # author = df_pandas.loc[0:n, 'AUTHOR'].values.tolist() # class ModelTransformer(TransformerMixin): # def __init__(self, model): # self.model = model # def fit(self, *args, **kwargs): # self.model.fit(*args, **kwargs) # return self # def transform(self, X, **transform_params): # return pd.DataFrame(self.model.predict(X)) # df_pandas = df_pandas.join(pd.get_dummies(df_pandas.loc[:,'ITEMTYPE']), how='inner') # df_pandas = df_pandas.join(pd.get_dummies(df_pandas.loc[:,'ITEMCOLLECTION']), how='inner') # df_pandas = df_pandas.join(pd.get_dummies(df_pandas.loc[:,'ITEMLOCATION']), how='inner') # list(df_pandas.columns.values) # #use extend so it's a big flat list of vocab # totalvocab_stemmed = [] # totalvocab_tokenized = [] # for i in df_pandas['subjects_c']: # allwords_stemmed = tokenize_and_stem(str(i)) #for each item in 'synopses', tokenize/stem # totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list # allwords_tokenized = tokenize_only(str(i)) # totalvocab_tokenized.extend(allwords_tokenized) vocab_frame = pd.DataFrame({'words': totalvocab_tokenized}, index = totalvocab_stemmed) print('there are ' + str(vocab_frame.shape[0]) + ' items in vocab_frame') print(vocab_frame.head()) #define vectorizer parameters vectorizer_pipe = Pipeline(steps=[('tfidf', TfidfVectorizer(max_df=0.8, max_features=200000, min_df=5, stop_words='english', use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(1,3)))]) %time tfidf_matrix = vectorizer_pipe.fit_transform(subjects) # fit the vectorizer to synopses print(tfidf_matrix.shape) terms = tfidf_vectorizer.get_feature_names() dist = 1 - euclidean_distances(tfidf_matrix) cluster_pipe = Pipeline(steps=[('cluster', KMeans(n_clusters = 5, random_state=3425))]) %time cluster_pipe.fit(tfidf_matrix) clusters = cluster_pipe.named_steps['cluster'].labels_.tolist() from sklearn.externals import joblib #uncomment the below to save your model #since I've already run my model I am loading from the pickle joblib.dump(cluster_pipe, 'doc_cluster.pkl') km = joblib.load('doc_cluster.pkl') clusters = cluster_pipe.named_steps['cluster'].labels_.tolist() books = { 'title': titles, 'author': author, 'subjects': subjects_c, 'cluster': clusters } frame = pd.DataFrame(books, index = [clusters] , columns = ['title', 'author', 'cluster']) # frame.columns frame.cluster.value_counts() #number of books per cluster (clusters from 0 to 4) from __future__ import print_function print(ind) print("Top terms per cluster:") print() #sort cluster centers by proximity to centroid order_centroids = cluster_pipe.named_steps['cluster'].cluster_centers_.argsort()[:, ::-1] for i in range(num_clusters): print("Cluster %d words:" % i, end='') for ind in order_centroids[i, :6]: #replace 6 with n words per cluster print(' %s' % vocab_frame.loc[terms[ind].split(' ')].values.tolist()[0][0].encode('utf-8', 'ignore'), end=',') print() #add whitespace print() #add whitespace for ind in order_centroids[i, :3]: #replace 6 with n words per cluster x += ' %s' % vocab_frame.loc[terms[ind].split(' ')].values.tolist()[0][0].encode('utf-8', 'ignore'), end=',' # print("Cluster %d titles:" % i, end='') # for title in frame.loc[i,'title'].values.tolist(): # print(' %s,' % title, end='') # import os # for os.path.basename # import matplotlib.pyplot as plt # import matplotlib as mpl # from sklearn.manifold import MDS # MDS() # # convert two components as we're plotting points in a two-dimensional plane # # "precomputed" because we provide a distance matrix 
# # we will also specify `random_state` so the plot is reproducible. # mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1) # pos = mds.fit_transform(dist) # shape (n_components, n_samples) # xs, ys = pos[:, 0], pos[:, 1] dummies = pd.get_dummies(frame.cluster) dummies.columns = dummies.columns.astype(str) list(dummies.columns.values) # dummies.rename(index=str, columns={"0": "Juvenile Literature", "1": "Music (Country)", "2": "Mystery", "3": "Music (Rock)", "4": "Comic Books"}) dummies.columns = ["Drama / Film / Rock", "Juvinile Mystery", "United States / Biography", "Juvinile Literature / Biography", "Comic Books"] list(dummies.columns.values) #frame df_pandas_dummies = df_pandas.join(dummies, how='left') df_pandas_dummies list(df_pandas_dummies.columns.values) prediction = df_pandas.loc[(n+1):,:] X_train, X_test, y_train, y_test = train_test_split(df_pandas_dummies, dummies, test_size=0.2) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) ###Output _____no_output_____
Utkarsh_NLP_Subsampling.ipynb
###Markdown In this notebook we will learn how to implement subsampling for a word2vec model. Importing The Necessary Stuff ###Code
import numpy as np
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
from collections import Counter
import random
###Output _____no_output_____ ###Markdown Necessary Functions ###Code
def create_lookup_tables(words):
    """
    Create lookup tables for vocabulary
    :param words: Input list of words
    :return: Two dictionaries, vocab_to_int, int_to_vocab
    """
    word_counts = Counter(words)
    # sorting the words from most to least frequent in text occurrence
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    # create int_to_vocab dictionaries
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab
###Output _____no_output_____ ###Markdown Downloading The Dataset ###Code
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'

class DLProgress(tqdm):
    last_block = 0
    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(dataset_filename):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
        urlretrieve('http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook)

if not isdir(dataset_folder_path):
    with zipfile.ZipFile(dataset_filename) as zip_ref:
        zip_ref.extractall(dataset_folder_path)
###Output _____no_output_____ ###Markdown Reading words from the given file ###Code
words = []
with open('data/text8') as f:
    # reading each line
    for line in f:
        # reading each word
        for word in line.split():
            words.append(word)

print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
###Output Total words: 17005207 Unique words: 253854 ###Markdown Creating Dictionaries for Simplicity This is done using the `create_lookup_tables` function which we created above. ###Code
vocab_to_int, int_to_vocab = create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
###Output _____no_output_____ ###Markdown Subsampling Words that show up often such as "a", "an", "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}$$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. ###Code
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
sampled_words = [int_to_vocab[train_word] for train_word in train_words]
###Output _____no_output_____ ###Markdown Get Sampled words ###Code
print(sampled_words[:30])
print("Total words: {}".format(len(train_words)))
###Output Total words: 4981605
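###Markdown A quick way to see the effect of subsampling (assuming the variables created above) is to compare how often a very frequent word such as "the" appears before and after dropping: ###Code
# Compare counts of a frequent stop-like word before and after subsampling
the_int = vocab_to_int['the']
before = word_counts[the_int]
after = Counter(train_words)[the_int]
print("'the' before: {}, after: {}, kept fraction: {:.3f}".format(before, after, after / before))
###Output _____no_output_____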
notebooks/process.ipynb
###Markdown Inspecting AiiDA processes ###Code %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } %aiida import ipywidgets as ipw from IPython.display import clear_output from aiida.cmdline.utils.ascii_vis import format_call_graph import urllib.parse as urlparse from aiidalab_widgets_base import ProcessFollowerWidget, ProgressBarWidget, ProcessReportWidget from aiidalab_widgets_base import ProcessInputsWidget, ProcessOutputsWidget, ProcessCallStackWidget, RunningCalcJobOutputWidget url = urlparse.urlsplit(jupyter_notebook_url) url_dict = urlparse.parse_qs(url.query) if 'id' in url_dict: pk = int(url_dict['id'][0]) process = load_node(pk) else: process = None ###Output _____no_output_____ ###Markdown Process inputs. ###Code display(ProcessInputsWidget(process)) ###Output _____no_output_____ ###Markdown Process outputs. ###Code display(ProcessOutputsWidget(process)) follower = ProcessFollowerWidget( process, followers=[ProgressBarWidget(), ProcessReportWidget(), ProcessCallStackWidget(), RunningCalcJobOutputWidget()], path_to_root="../../", update_interval=2) display(follower) follower.follow(detach=True) ###Output _____no_output_____ ###Markdown Load data ###Code from dashboard.load import load import dashboard.config as config from dashboard.gheets import ManualFlow import json flow = ManualFlow() url = flow.get_url() with open('notebooks/url.json', 'w') as wf: json.dump(url, wf) flow.put_code("4/1AY0e-g5XNIzd3u-AMkZxqjo4XaitG6m2vrizO2od2U8FCi6pOy6sZ7TI4Fo") creds = flow.get_google_token() data = load( credentials=creds, pomodoros_spreadsheet_id=config.POMODOROS_SPREADSHEET_ID, pomodoros_range=config.POMODOROS_RANGE, activities_spreadsheet_id=config.ACTIVITIES_SPREADSHEET_ID, activities_range=config.ACTIVITIES_RANGE ) data.df.head() ###Output _____no_output_____ ###Markdown Process data ###Code from dashboard.process import compute_weekly_stats weekly_stats = compute_weekly_stats(data) weekly_stats.df.head() ###Output _____no_output_____ ###Markdown Get current week ###Code x = weekly_stats.df.loc[weekly_stats.df.Week == weekly_stats.df.Week.max(), :] import datetime today = datetime.date.today() x.from_date.iloc[0] <= today <= x.to_date.iloc[0] x ###Output _____no_output_____ ###Markdown Get sliding window ###Code current = x.Week.iloc[0] weekly_stats.df.loc[ (current - weekly_stats.df.Week <= 3) & (current - weekly_stats.df.Week > 0), :] ###Output _____no_output_____ ###Markdown Get zone ###Code today today.weekday() datetime.date(2021, 4, 5).weekday() 7 - today.weekday() ###Output _____no_output_____ ###Markdown Suggested action ###Code data.df.loc[data.df.Date == pd.to_datetime(today), :] today import pandas as pd pd.to_datetime(today) data.df.info() weekly_stats.df.loc[(weekly_stats.df.Week > 3) & (weekly_stats.df.Week < 14),['Week', 'done']].median() ###Output _____no_output_____ ###Markdown Point Processes**Author: Serge Rey and Wei Kang ** IntroductionOne philosophy of applying inferential statistics to spatial data is to think in terms of spatial processes and their possible realizations. In this view, an observed map pattern is one of the possible patterns that might have been generated by a hypothesized process. In this notebook, we are going to regard point patterns as the outcome of point processes. 
There are three major types of point process, which will result in three types of point patterns:* [Random Patterns](Random-Patterns)* [Clustered Patterns](Clustered-Patterns)* [Regular Patterns](Regular-Patterns)We will investigate how to generate these point patterns via simulation (Data Generating Processes (DGP) is the correponding point process), and inspect how these resulting point patterns differ from each other visually. In [Quadrat statistics notebook](Quadrat_statistics.ipynb) and [distance statistics notebook](distance_statistics.ipynb), we will adpot some statistics to infer whether it is a [Complete Spaital Randomness](https://en.wikipedia.org/wiki/Complete_spatial_randomness) (CSR) process.A python file named "process.py" contains several point process classes with which we can generate point patterns of different types. ###Code from pointpats import PoissonPointProcess, PoissonClusterPointProcess, Window, poly_from_bbox, PointPattern import libpysal as ps from libpysal.cg import shapely_ext %matplotlib inline import numpy as np #import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Random PatternsRandom point patterns are the outcome of CSR. CSR has two major characteristics:1. Uniform: each location has equal probability of getting a point (where an event happens)2. Independent: location of event points are independentIt usually serves as the null hypothesis in testing whether a point pattern is the outcome of a random process.There are two types of CSR:* $N$-conditioned CSR: $N$ is fixed * Given the total number of events $N$ occurring within an area $A$, the locations of the $N$ events represent an independent random sample of $N$ locations where each location is equally likely to be chosen as an event.* $\lambda$-conditioned CSR: $N$ is randomly generated from a Poisson process. * The number of events occurring within a finite region $A$ is a random variable $\dot{N}$ following a Poisson distribution with mean $\lambda|A|$, with $|A|$ denoting area of $A$ and $\lambda$ denoting the intensity of the point pattern. * Given the total number of events $\dot{N}$ occurring within an area $A$, the locations of the $\dot{N}$ events represent an independent random sample of $\dot{N}$ locations where each location is equally likely to be chosen as an event. Simulating CSRWe are going to generate several point patterns (200 events) from CSR within Virginia state boundary. ###Code # open the virginia polygon shapefile va = ps.io.open(ps.examples.get_path("virginia.shp")) polys = [shp for shp in va] # Create the exterior polygons for VA from the union of the county shapes state = shapely_ext.cascaded_union(polys) # create window from virginia state boundary window = Window(state.parts) ###Output _____no_output_____ ###Markdown 1. Generate a point series from N-conditioned CSR ###Code # simulate a csr process in the same window (200 points, 1 realization) # by specifying "asPP" false, we can generate a point series # by specifying "conditioning" false, we can simulate a N-conditioned CSR np.random.seed(5) samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=False) samples samples.realizations[0] # simulated event points # build a point pattern from the simulated point series pp_csr = PointPattern(samples.realizations[0]) pp_csr pp_csr.plot(window=True, hull=True, title='Random Point Pattern') pp_csr.n ###Output _____no_output_____ ###Markdown 2. 
Generate a point series from $\lambda$-conditioned CSR ###Code # simulate a csr process in the same window (200 points, 1 realization) # by specifying "asPP" false, we can generate a point series # by specifying "conditioning" True, we can simulate a lamda-conditioned CSR np.random.seed(5) samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=False) samples samples.realizations[0] # simulated points # build a point pattern from the simulated point series pp_csr = PointPattern(samples.realizations[0]) pp_csr pp_csr.plot(window=True, hull=True, title='Random Point Pattern') pp_csr.n ###Output _____no_output_____ ###Markdown The simulated point pattern has $194$ events rather than the Possion mean $200$. 3. Generate a point pattern from N-conditioned CSR ###Code # simulate a csr process in the same window (200 points, 1 realization) # by specifying "asPP" True, we can generate a point pattern # by specifying "conditioning" false, we can simulate a N-conditioned CSR np.random.seed(5) samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=True) samples pp_csr = samples.realizations[0] # simulated point pattern pp_csr pp_csr.plot(window=True, hull=True, title='Random Point Pattern') pp_csr.n ###Output _____no_output_____ ###Markdown 4. Generate a point pattern of size 200 from a $\lambda$-conditioned CSR ###Code # simulate a csr process in the same window (200 points, 1 realization) # by specifying "asPP" True, we can generate a point pattern # by specifying "conditioning" True, we can simulate a lamda-conditioned CSR np.random.seed(5) samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=True) samples pp_csr = samples.realizations[0] # simulated point pattern pp_csr pp_csr.plot(window=True, hull=True, title='Random Point Pattern') pp_csr.n ###Output _____no_output_____ ###Markdown Clustered PatternsClustered Patterns are more grouped than random patterns. Visually, we can observe more points at short distances. There are two sources of clustering:* Contagion: presence of events at one location affects probability of events at another location (correlated point process)* Heterogeneity: intensity $\lambda$ varies with location (heterogeneous Poisson point process)We are going to focus on simulating correlated point process in this notebook. One example of correlated point process is Poisson cluster process. Two stages are involved in simulating a Poisson cluster process. First, parent events are simulted from a $\lambda$-conditioned or $N$-conditioned CSR. Second, $n$ offspring events for each parent event are simulated within a circle of radius $r$ centered on the parent. Offspring events are independently and identically distributed. 1. Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $N$-conditioned CSR) ###Code np.random.seed(5) csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=False) csamples csamples.parameters #number of total events for each realization csamples.num_parents #number of parent events for each realization csamples.children # number of children events centered on each parent event pp_pcp = csamples.realizations[0] pp_pcp pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern') #plot the first realization ###Output _____no_output_____ ###Markdown It is obvious that there are several clusters in the above point pattern. 2. 
Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $\lambda$-conditioned CSR) ###Code import numpy as np np.random.seed(10) csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=True) csamples csamples.parameters #number of events for the realization might not be equal to 200 csamples.num_parents #number of parent events for the realization, not equal to 10 csamples.children # number of children events centered on each parent event pp_pcp = csamples.realizations[0] pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern') ###Output _____no_output_____ ###Markdown 3. Simulate a Poisson cluster process of size 200 with 5 parents and 40 children within 0.5 units of each parent (parent events: $N$-conditioned CSR) ###Code np.random.seed(10) csamples = PoissonClusterPointProcess(window, 200, 5, 0.5, 1, asPP=True) pp_pcp = csamples.realizations[0] pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern') ###Output _____no_output_____ ###Markdown Load the JSON into dataframes ###Code from azure.storage.blob import ContainerClient from ipython_secrets import * import pandas as pd import numpy as np import os import pickle sas = os.environ.get('AZURE_SAS') if sas is None: sas = get_secret('AZURE_SAS') os.putenv('AZURE_SAS', sas) os.environ['AZURE_SAS'] = sas # Instantiate a new ContainerClient container_client = ContainerClient.from_container_url(sas) # So far we have 4 large JSON files with recipes. All-recipes one is a bit of a mess, we need to clean it later. recipes = ["processed/allrecipes/allrecipes-recipes.json","processed/bbc/bbccouk-recipes.json","processed/cookstr/cookstr-recipes.json","processed/epicurious/epicurious-recipes.json"] raw = [] for i in recipes: blob_client = container_client.get_blob_client(i) download_stream = blob_client.download_blob() df = pd.read_json(download_stream.readall(),lines=True, encoding="Latin-1") raw.append(df) print("Loaded " + i + ". We have " + str(df.columns.tolist()) + "\n") ###Output Loaded processed/allrecipes/allrecipes-recipes.json. We have ['author', 'cook_time_minutes', 'description', 'error', 'footnotes', 'ingredients', 'instructions', 'photo_url', 'prep_time_minutes', 'rating_stars', 'review_count', 'time_scraped', 'title', 'total_time_minutes', 'url'] Loaded processed/bbc/bbccouk-recipes.json. We have ['chef', 'chef_id', 'cooking_time_minutes', 'description', 'error', 'ingredients', 'instructions', 'instructions_detailed', 'photo_url', 'preparation_time_minutes', 'program', 'program_id', 'serves', 'time_scraped', 'title', 'total_time_minutes', 'url'] Loaded processed/cookstr/cookstr-recipes.json. We have ['chef', 'comment_count', 'contributors', 'cookbook', 'cookbook_publisher', 'cooking_method', 'copyright', 'cost', 'course', 'date_modified', 'description', 'dietary_considerations', 'difficulty', 'error', 'ingredients', 'ingredients_detailed', 'instructions', 'kid_friendly', 'make_ahead', 'makes', 'meal', 'occasion', 'photo_credit_name', 'photo_credit_site', 'photo_url', 'rating_count', 'rating_value', 'taste_and_texture', 'time_scraped', 'title', 'total_time', 'type_of_dish', 'url'] Loaded processed/epicurious/epicurious-recipes.json. 
We have ['id', 'dek', 'hed', 'pubDate', 'author', 'type', 'url', 'photoData', 'tag', 'aggregateRating', 'ingredients', 'prepSteps', 'reviewsCount', 'willMakeAgainPct', 'dateCrawled'] ###Markdown Lazy loading mechanism to avoid Blob storage ###Code with open('interim.pkl', 'wb') as f: pickle.dump(raw, f) with open('interim.pkl', 'rb') as f: raw = pickle.load(f) ###Output _____no_output_____ ###Markdown Clean up raw data ###Code ################################ Allrecipes Recipes ################################ allrecipes = raw[0] # change name of rating-stars to rating, and drop minutes from time variables allrecipes.rename({'rating_stars':'rating', 'cook_time_minutes':'cook_time', 'prep_time_minutes':'prep_time', 'total_time_minutes':'total_time'}, axis=1, inplace=True) # Fix some formatting allrecipes.description = allrecipes.description.str.replace('[','').str.replace(']','') allrecipes.footnotes = allrecipes.footnotes.apply(str).str.replace('[','').str.replace(']','') allrecipes.footnotes = allrecipes.footnotes.replace(r'\s+( +\.)|#',np.nan,regex=True).replace('',np.nan) allrecipes.description = allrecipes.description.replace(r'\s+( +\.)|#',np.nan,regex=True).replace('',np.nan) # Drop duplicates, somebody REALLY likes pizza allrecipes = allrecipes[allrecipes['title'] !="Johnsonville® Three Cheese Italian Style Chicken Sausage Skillet Pizza"] ################################ BBC CO UK Recipes ################################ bbcrecipes = raw[1] bbcrecipes = bbcrecipes.drop(['chef_id', 'instructions_detailed', 'program_id'], 1) bbcrecipes.rename({'cooking_time_minutes':'cook_time', 'preparation_time_minutes':'prep_time', 'total_time_minutes':'total_time','serves':'makes', 'chef': 'author','program':'tag'}, axis=1, inplace=True) ################################ COOKSTR Recipes ################################ cookstrecipes = raw[2] drop_cols = ['contributors', 'cookbook', 'cookbook_publisher', 'cooking_method', 'cost', 'course', 'dietary_considerations', 'difficulty', 'meal', 'occasion', 'taste_and_texture', 'type_of_dish','rating_count', 'comment_count', 'copyright', 'date_modified','ingredients_detailed','kid_friendly','make_ahead','photo_credit_name','photo_credit_site'] cookstrecipes = cookstrecipes.drop(drop_cols, 1) # Rename rating cookstrecipes.rename({'rating_value':'rating','chef':'author'}, axis=1, inplace=True) # Fill missing ratings with 0 cookstrecipes.rating.fillna(0,inplace=True) ################################ epicurious check ################################ epicuriousrecipes = raw[3] # Rename columns epicuriousrecipes.rename({'prepSteps':'instructions', 'aggregateRating':'rating', 'reviewsCount':'review_count','author':'chef'}, axis=1, inplace=True) # Change rating scale to 0-5 epicuriousrecipes.rating = epicuriousrecipes.rating*(5/4) epicuriousrecipes.rename(columns={'dateCrawled':'time_scraped', 'hed':'title','dek':'description'}, inplace=True) # Drop 100 missing ingredient recipes and other columns epicuriousrecipes = epicuriousrecipes.dropna(0) drop_cols = ['id', 'photoData', 'pubDate', 'type'] epicuriousrecipes = epicuriousrecipes.drop(drop_cols,1) epicuriousrecipes.url = "https://www.epicurious.com" + epicuriousrecipes.url epicuriousrecipes['author'] = [row[0]['name'] if len(row) > 0 else "" for row in epicuriousrecipes['chef']] ################################ Make full dataframe ################################ dfs = [allrecipes, bbcrecipes, cookstrecipes, epicuriousrecipes] df_all = pd.concat(dfs,0,sort=True,ignore_index=True) # Trash some columns because the 
data is not very reliable or interesting df_total = df_all.drop(['willMakeAgainPct','chef','tag','error'], 1) df_total = df_total.replace(0.0,np.nan) df_total.rename({'review_count':'reviews','time_scraped':'scraped'}) nulls = df_total.isnull().sum(axis=0) print(nulls.apply(lambda x: str(x)+str(' missing'))) print("\nDone!\nFull recipe dataset is ready!") print("Final shape: {:,} rows with {} columns".format(df_total.shape[0], df_total.shape[1])) ###Output author 1803 missing cook_time 64972 missing description 28 missing footnotes 108832 missing ingredients 0 missing instructions 0 missing makes 129976 missing photo_url 42993 missing prep_time 76477 missing rating 40051 missing review_count 41296 missing time_scraped 0 missing title 0 missing total_time 48334 missing url 0 missing dtype: object Done! Full recipe dataset is ready! Final shape: 144,551 rows with 15 columns ###Markdown Tidying up the ingredientsWe need to clean up the ingredients. This has to be done in two ways:* Split them up into quantities and items in a dict* Convert them from list into dict ###Code fixed_ingredients = df_total.copy() def fix_ingredient(ingredients): return [{'name': 'lemonjuice', 'qty': 5,'unit': 'tbsp', 'comment': 'fresh', 'fulltext': item} for item in ingredients] # This is not really a fix obviously, need to write this still :-) # TODO fixed_ingredients['ingredients'] = [fix_ingredient(row) for row in df_total['ingredients']] fixed_ingredients blob_client = container_client.get_blob_client("total.csv") blob_client.upload_blob(fixed_ingredients.to_csv(), blob_type="BlockBlob", overwrite=True) blob_client = container_client.get_blob_client("total.json") blob_client.upload_blob(fixed_ingredients.to_json(orient='records'), blob_type="BlockBlob", overwrite=True) ###Output _____no_output_____ ###Markdown Time to explore ###Code fixed_ingredients[fixed_ingredients["title"]=="Pico de Gallo"]["ingredients"] ###Output _____no_output_____
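###Markdown The `fix_ingredient` helper above is still the placeholder flagged by the TODO: every ingredient is mapped to the same dummy lemon-juice dict. A rough sketch of what a real parser could look like is shown below; the regex and the unit list are illustrative assumptions, not a complete solution (fractions such as "1/2", ranges like "2-3 cups" and unit synonyms would still need handling). ###Code
import re

# Minimal illustrative unit list - extend as needed
UNITS = {"cup", "cups", "tbsp", "tablespoon", "tablespoons",
         "tsp", "teaspoon", "teaspoons", "g", "gram", "grams",
         "kg", "ml", "l", "oz", "ounce", "ounces", "lb", "pound", "pounds"}

def parse_ingredient(text):
    # e.g. "2 tbsp fresh lemon juice" -> qty=2.0, unit="tbsp", name="fresh lemon juice"
    match = re.match(r"\s*(\d+(?:\.\d+)?)?\s*(\w+)?\s*(.*)", text)
    qty, unit, rest = match.groups()
    if unit and unit.lower() not in UNITS:
        # second token was not a unit, so it belongs to the ingredient name
        rest = ((unit or "") + " " + rest).strip()
        unit = None
    return {"name": rest.strip().lower() or None,
            "qty": float(qty) if qty else None,
            "unit": unit.lower() if unit else None,
            "comment": None,
            "fulltext": text}

def fix_ingredient(ingredients):
    return [parse_ingredient(item) for item in ingredients]
###Output _____no_output_____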
PreviousDevelopment/endstops.ipynb
###Markdown Endstop Tests Objective - Figure out the endstops. Code: ###Code import GRBL

cnc = GRBL.GRBL(port="/dev/cnc_3018")
cnc.status
cnc.cmd("$$")     # print current GRBL settings
cnc.cmd("$22=1")  # enable the homing cycle
cnc.cmd("$H")     # run homing against the endstops
###Output _____no_output_____
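###Markdown A rough sanity check after triggering homing - this assumes `cnc.status` returns the usual GRBL status line (e.g. `<Idle|MPos:0.000,0.000,0.000|...>`), so we simply poll until the controller reports `Idle` again: ###Code
import time

# poll status for up to ~30 seconds while the homing cycle runs
for _ in range(30):
    state = cnc.status
    print(state)
    if "Idle" in str(state):
        break
    time.sleep(1)
###Output _____no_output_____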
Deep Learning/keras_introduction.ipynb
###Markdown Introducción a redes neuronales con KerasLas redes neuronales constituyen uno de los modelos más interesantes y complejos dentro de Machine Learning, pueden ser utilizados en tareas tanto de clasificación, como de regresión, su unidad básica es la **neurona** o simplemente **unidad**, de la cual combinando muchas neuronas entre sí, se obtiene una red que es capaz de resolver problemas bastante complejos, es un modelo inspirado en como funcionan las neuronas bilógicas de nuestro cerebro, a continuación describiremos en que consiste una neurona, que es la unidad básica de cómputo dentro de una red neuronal. Modelo de una neurona Una neurona consiste en una unidad computacional de $n$ variables de entrada y una salida $y$, además de tener $n$ entradas, también se le agrega una entrada adicional $b$ llamada **bias** el cual es una constante. La gracia de una neurona es que al igual que un modelo de **regresión lineal**, todas las entradas son sumadas de manera ponderada, lo que significa que a cada una de las variables se les multiplica por un parámetro $w$ llamado **peso**, el resultado de dicha suma ponderada la llamaremos $z_l$. De por momento no existe diferencia alguna entre una neurona y un modelo de regresión lineal ya que:$$ z_l = \sum_{j=1}^{n}w_jx_j + b $$Pero, la gran diferencia entre una red neuronal y un modelo de regresión lineal es que en vez de utilizar la salida de la suma ponderada, dicha suma es pasada por una **función de activación**, la cual la representaremos con la letra $\sigma$, por ende el modelo completo de una neurona queda determinado por las siguientes ecuaciones.$$y = \sigma(z_l)$$$$z_l = \sum_{j=1}^{n}w_jx_j + b $$ La función de activación puede tomar muchas formas, pero las más comunes son: Función SigmoideEs la misma función utilizada en regresión logística, se define como$$\sigma(z_l) = \frac{1}{1+e^{-z_l}}$$ ###Code z_l = np.linspace(-8, 8, 100) sigma_zl = 1 / (1 + np.exp(-z_l)) plt.figure(figsize=(10,5)) plt.plot(z_l, sigma_zl, linewidth=3, c="red") plt.grid(True) plt.title("Sigmoide") plt.xlabel("$z_l$") plt.ylabel("$\sigma(z_l)$") plt.show() ###Output _____no_output_____ ###Markdown Tangente HiperbólicaEs una función similar a la función sigmoide con la diferencia de que va dentro del rango de -1 a 1, en vez de ir de 0 a 1, se define como:$$ tanh(z_l) = \frac{e^{z_l} - e^{-z_l}}{e^{z_l} + e^{-z_l}} $$ ###Code tanh_zl = np.tanh(z_l) plt.figure(figsize=(10,5)) plt.plot(z_l, tanh_zl, linewidth=3, c="blue") plt.grid(True) plt.title("Tanh") plt.xlabel("$z_l$") plt.ylabel("$\sigma(z_l)$") plt.show() ###Output _____no_output_____ ###Markdown ReLu**ReLu** viene de **unidad lineal rectificada**,es la función más utilizada para entrenar redes neuronales debido a su simplicidad y también debido a que empírica y teoricamente muestra mayores tasas de convergencia, en comparación a tanh y sigmoide, más adelante veremos el porqué. Dicha función se define como: $$ReLu(z_l) = max(0, z_l)$$ ###Code relu_zl = np.maximum(z_l, 0) plt.figure(figsize=(10,5)) plt.plot(z_l, relu_zl, linewidth=3, c="grey") plt.grid(True) plt.title("ReLu") plt.xlabel("$z_l$") plt.ylabel("$\sigma(z_l)$") plt.show() ###Output _____no_output_____ ###Markdown EscalónEs la función de activación más antigua, utilizada en las primeras investigaciones sobre redes neuronales, hoy en día se utilizada solamente para fines pedagógicos debido a su simpleza de entendimiento. 
Se define como:$$\sigma(z_l)= \left\{ \begin{array}{lcc} 0 & si & z_l < 0 \\ 1 & si & z_l \geq 0 \end{array} \right.$$ ###Code step_zl = np.heaviside(z_l, 0.5) plt.figure(figsize=(10,5)) plt.plot(z_l, step_zl, linewidth=3, c="black") plt.grid(True) plt.title("Escalón") plt.xlabel("$z_l$") plt.ylabel("$\sigma(z_l)$") plt.show() ###Output _____no_output_____ ###Markdown En principio podría parecer un poco misterioso el porqué se le aplica una función de activación a una neurona, pero a medida que avanzemos, todo quedará mucho más claro. Red neuronalEn primera instancia podría parecer que una sola neurona por si solo no es de mucha utilidad, de hecho, si utilizamos la función sigmoide como activación, terminaría siendo un modelo de regresión logística practicamente. Para entender la función de las neuronas y de la función de activación, intentemos resolver el siguiente problema: ###Code X1, y1 = ([0, 1], [0, 1]) X2, y2 = ([0, 1], [1, 0]) x = np.linspace(-0.5, 1.5, 100).reshape(-1, 1) y = np.zeros((100, 1)) + 0.5 plt.scatter(X1, y1, s=400, c='red') plt.scatter(X2, y2, s=400, c='blue') plt.plot(x,y, '--', linewidth=4) plt.grid(True) plt.xlim([-0.5, 1.5]) plt.ylim([-0.5, 1.5]) plt.show() ###Output _____no_output_____ ###Markdown Si desearamos clasificar los puntos que se muestran en la figura anterior utilizando una neurona, podriamos utilizar el escalón como función de activación, entonces si la salida de la neurona es 1, significa que es de la categoría azul y si es 0, sería de la categoría roja, entonces simplemente habría que ajustar los parámetros de la neurona hasta generar una combinación que pueda categorizar correctamente los puntos de la figura, sencillo no? El único problema que surge con el razonamiento anterior, es que ajustar los parámetros de la neurona significa encontrar una recta que sea capaz de separar todos los puntos en dos categorías de manera correcta, pero si observamos la recta punteada de la figura anterior, pareciera de que no existe manera alguna de separar los puntos rojos de los puntos azules utilizando sólo una recta, entonces, que se puede hacer al respecto? Primera Arquitectura NeuronalPara solucionar el problema, necesitaremos simplemente usar más neuronas! en la imagen a continuación se muestra una red neuronal compuesto por 3 neuronas, dos entradas y una salida. Uno se preguntará como esto nos puede ayudar y de que se diferencia del caso anterior? Pues, resulta que tenemos muchos más parámetros con qué trabajar! De hecho tenemos exactamente 9 parámetros que podemos controlar, 6 pesos y 3 bias. Teniendo ahora 3 neuronas, podemos enfocarnos en que las primeras dos se **especializen** en separar los puntos utilizando una línea recta cada una, mientras que la última neurona se puede enfocar en tomar la desición de la categoría a la cual el punto corresponde, la imagen a continuación muestra un escenario ideal, que muestra la separación que las primeras dos neuronas realizan. ###Code x_ = np.linspace(-0.5, 1.5, 100) y_1_ = x - 0.5 y_2_ = x + 0.5 plt.scatter(X1, y1, s=400, c='red') plt.scatter(X2, y2, s=400, c='blue') plt.plot(x_,y_1_, '--', linewidth=3) plt.plot(x_,y_2_, '--', linewidth=3) plt.grid(True) plt.xlim([-0.5, 1.5]) plt.ylim([-0.5, 1.5]) plt.show() ###Output _____no_output_____ ###Markdown Ahora simplemente la última neurona tiene que definir si el punto se encuentra entre las dos rectas, o se encuentra fuera de estas para realizar su desición de clasificación. 
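###Markdown A modo de ilustración, el siguiente bosquejo implementa en numpy la red de 3 neuronas descrita, usando la función escalón y pesos elegidos a mano (no son el resultado de un entrenamiento): las dos neuronas ocultas codifican las dos rectas punteadas de la figura y la neurona de salida decide si el punto queda fuera de la franja entre ellas. ###Code
import numpy as np

def escalon(z):
    return np.heaviside(z, 1.0)

def red_xor(x1, x2):
    # neuronas ocultas: una recta cada una (y = x - 0.5 e y = x + 0.5)
    h1 = escalon(x1 - x2 - 0.5)   # 1 si el punto está por debajo de la recta inferior
    h2 = escalon(x2 - x1 - 0.5)   # 1 si el punto está por encima de la recta superior
    # neurona de salida: 1 (azul) si el punto queda fuera de la franja entre rectas
    return escalon(h1 + h2 - 0.5)

for punto in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(punto, int(red_xor(*punto)))
###Output _____no_output_____ ###Markdown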
Con este análisis podemos concluir dos puntos muy relevantes:- La función de activación es escencial en el sentido de que permite **distorcionar** la salida de tal manera de que sea posible realizar tareas de clasificación, en otras palabras, **permite el aprendizaje** de la neurona.- La **especialización** es la base que permite que redes neuronales complejas, puedan solucionar problemas complejos, en donde la primera capa de neuronas resuelven problemas bastante sencillos, mientras que las últimas capas realizan clasificaciones sofisticadas. A esto se le llama **deep learning**. > El problema expuesto en esta sección se conoce como el problema **XOR** el cual fue uno de los primeros problemas investigados para determinar el potencial de las redes neuronales, sin la función de activación, la red no sería capaz de resolver dicho problema debido a un simple teorema que plantea que la composición de varias **transformaciones lineales** es simplemente una **transformación lineal**, por lo que se hace necesario agregar una deformación **no lineal**. Arquitectura generalA continuación formalizaremos lo que hemos discutido sobre redes neuronales, en la práctica existen diversas arquitecturas posibles, por lo que procederemos a mostrar el esquema general de una red introduciendo conceptos importantes, en la figura a continuación se puede apreciar dicha arquitectura. - La **capa de entrada** consiste en las variables o características que serán alimentadas a la red neuronal, por ejemplo si queremos clasificar el precio de una casa como "*costoso*" o "*barato*", las características de entrada podrían ser el *precio*, *numero de habitaciones*, *metros cuadrados*, etc.- Las **capas ocultas** son todas aquellas que no están a la "*vista*", es decir, no generan un resultado final, pero sus cómputos si afectan a las variables de salida y son estimuladas por la capa de entrada o capas ocultas anteriores.- La **capa de salida** es la última capa de una red neuronal y es la capa que expone el resultado final de la red, una vez realizado todos los cálculos. 
Notación y ecuaciones generalesAntes de continuar, es importante hacer hincapié en la notación utilizada a lo largo de este artículo, ya que con tantas neuronas y tantas conexiones, se puede volver confuso, pero con el tiempo la notación se irá volviendo mucho más familiar.- $L$: Corresponde a la última capa de la red neuronal.- $l$: Hace referencia a alguna capa de la red.- $a_{i}^{l}$: Corresponde a la activación de la neurona $i$, en la capa $l$.- $w_{jk}^{l}$: Corresponde al **peso** de la conexión entre la neurona $k$ de la capa $l-1$, a la neurona $j$ de la capa $l$.- $b_{j}^{l}$: Corresponde al **bias** de la neurona $j$ en la capa $l$.- $\sigma$: Corresponde a la función de activación.- $z_{j}^{l}$: Suma ponderada neurona $j$, capa $l$.Con la notación expuesta, tendriamos que la activación de cada neurona viene dado por la siguiente ecuación:$$a_{i}^{l} = \sigma(z_{j}^{l})$$$$z_j^l = \sum_{k}w_{jk}^{l}a_k^{l-1} + b_j^l$$Podemos ver que son muchos índices revueltos por todas partes, podemos facilitar bastante la notación si es que definimos los siguientes vectores y matrices:- $\boldsymbol{a^l} = [a_1^l, a_2^l, \dots,a_j^l]^T$: Corresponde a un vector que contiene la activación de todas las neuronas de la capa $l$.- $\boldsymbol{b^l} = [b_1^l, b_2^l, \dots,b_j^l]^T$: Corresponde a un vector que contiene todos los **bias** de cada neurona de la capa $l$.- $\boldsymbol{z^l} = [z_1^l, z_2^l, \dots,z_j^l]^T$: Corresponde a un vector que contiene todas las sumas ponderadas de cada neurona de la capa $l$.Para condensar todos los pesos de una capa, podemos definir la siguiente **matriz de pesos**, donde cada fila representa las conexiones de las neuronas de la capa anterior, hacia una neurona de la capa *l*.$$ \boldsymbol{W^l} = \begin{bmatrix}w_{11}^l & w_{12}^l & \dots & w_{1k}^l\\w_{21}^l & w_{22}^l & \dots & w_{2k}^l\\\vdots & \vdots & \ddots & \vdots \\w_{j1}^l & w_{j2}^l & \dots & w_{jk}^l\end{bmatrix}$$Definido esto, podemos representar las ecuaciones anteriores de la siguiente forma, el cual corresponde al cálculo de la activación de la capa $l$, en función de la capa anterior:$$\boldsymbol{a^l} = \sigma(\boldsymbol{z^l})$$$$\boldsymbol{z^l} = \boldsymbol{W^l}\boldsymbol{a^{l-1}} + \boldsymbol{b^l}$$A la ecuación anterior, se le llama **Ecuación de Feedforward**. Entrenando una red neuronalYa tenemos a disposición un modelo completo de una red neuronal, más en específico se le llama **red neuronal secuencial totalmente conectado**, el problema es que para que nos sea de utilidad, tenemos que poder ajustar los parámetros del modelo de alguna forma, y para eso necesitamos de una función de coste el cual nos permitirá medir el rendimiento del modelo y un algortimo que permita corregir dichos parámetros en base al error. Supondremos que se tiene un **dataset** con $n$ etiquetas que serán utilizadas para calcular el error, cada etiqueta es un vector del mismo tamaño que la capa de salida de la red. Error cuadrático medioSi queremos utilizar nuestra red para predecir valores en un rango continuo, el error cuadrático medio es la métrica por excelencia para dichos problemas, para nuestra red neuronal, se define de la siguiente manera, escrito en notación normal y matricial:$$ MSE(\boldsymbol{a^L}) = \frac{1}{2n}\sum_i \sum_j \left(a_j^L - a_j^{(i)}\right)^2 $$$$ MSE(\boldsymbol{a^L}) = \frac{1}{2n}\sum_i \left(\boldsymbol{a^L} - \boldsymbol{a^{(i)}}\right)^2 $$$a_j^{(i)}$ hace referencia a la categoría $j$ de la instancia $i$ del **dataset**. 
Nótese que el error es función únicamente de la activación de la capa de salida. Entropía cruzada y softmaxAl momento de clasificar, generalmente utilizaremos la entropía cruzada como **función de coste** debido a que toma en cuenta la cercanía entre una predicción y la categoría correcta, mientra más alejado se esté de la categoría correcta, mayor serpa el valor de la entropía cruzada, se define de la siguiente manera:$$ CE = -\sum_i \sum_j a_j^{(i)}log(a_j^L)$$$$ CE = -\sum_i \boldsymbol{a}^{(i)T} log(\boldsymbol{a^L})$$Es importante mencionar que para utilizar la entropía cruzada, se utiliza la función **softmax** como función de activación de la última capa. **Softmax** se caracteriza por ser una función el cual **comprime** los valores de entrada a un rango entre $[0, 1]$, con la importante característica de que la suma de los valores de salida es **siempre 1**, por lo tanto las salidas se pueden interpretar como probabilidades de pertenencia a dicha clase. La función **softmax** se define de la siguiente manera:$$a_j^L = \frac{e^{z_j^L}}{\sum_k e^{z_k^L}}$$ BackpropagationTenemos ya nuestro modelo y nuestra función de coste para medir el rendimiento de la red, pero como lo entrenamos? Podriamos manualmente manipular los parámetros hasta que el error llegue a un valor deseado, pero con eso no llegariamos a ninguna parte y tardariamos una eternidad, podriamos utilizar la misma estrategia con la cual entrenabamos nuestros modelos de regresión lineal y regresión logística, utilizando el **descenso del gradiente**, tendriamos que simplemente para cada iteración, calcular el error, luego el gradiente del error y actualizar los parámetros, pero hay un único problema. ¿Cómo calculamos el gradiente del error de la red? Si bien depende de la capa de salida, está depende de la capa anterior, que a su vez depende de la capa anterior y así sucesivamente. Si tenemos por ejemplo una red neuronal de 4 entradas, 4 capas ocultas con 4 neuronas cada capa y 4 neuronas de salida, tendriamos un total de $5 \times (4\times 4 + 4) = 100$ parámetros del cual el error depende! Definitivamente no es un problema sencillo, es por esto que en 1986 Rumelhart, Hinton y Williams introdujeron formalmente a través de un famoso paper el algoritmo de **backpropagation**. Para entender la idea detrás de **backpropagation** hay que definir una variable fundamental, que además de ser utilizada para el algoritmo, nos da una intuición de como se comporta la red a lo largo del tiempo, a continuación procedemos a definir lo que se conoce como el **error de una neurona**:$$ \delta_j^l = \frac{\partial a_j^l}{\partial z_j^l}$$En otras palabras, el error de la neurona $j$ de la capa $l$, viene dado por la derivada parcial de la activación de dicha neurona con respecto a su suma ponderada. Es interesante observar que si el error de una neurona es grande, al modificar una de sus entradas, será más propenso a modificar de gran manera el error de la red. El objetivo del algoritmo es utilizar el error de cada neurona para calcular de manera sencilla el gradiente de la red y lo hace siguiendo los pasos descritos a continuación:1. **Feedforward**: Alimenta a la red una o muchas instancias del **dataset** y calcula la salida para cada instancia utilizando la **ecuación de feedforward**.2. **Cálculo del error**: Calcula el error de cada neurona de la última capa de la red.3. 
**Backpropagation**: A partir del error de cada neurona de la última capa, calcula el error de la capa anterior, luego de la capa anterior a esa y así sucesivamente, hasta llegar a la primera capa de la red, he ahí el nombre del algoritmo.4. **Calculo del gradiente**: Teniendo el error de todas neuronas, calcula el gradiente de la función de coste.5. **Actualización de parámetros**: Teniendo el gradiente de la función de coste, actualiza los parámetros del modelo realizando una iteración del descenso del gradiente.6. **Repetición**: Repite los pasos anteriores hasta converger a un valor determinado.De los pasos descritos, surguen tres interrogantes, ¿Cómo se calcula el error de la última capa?, ¿Cómo se propaga hacia atras el error? y, ¿Cómo se calcula el gradiente utilizando los errores? Todas estas preguntas se responden gracias a las **ecuaciones de backpropagation**, pero antes de mostrarlas, definiremos un par de notaciones relevantes. > **Backpropagation** es un algoritmo computacionalmente costoso, pero es posible de realizar con el hardware existente, cosa que no era posible antes de la invención de este algoritmo, el cual luego de su salida, potenció enormemente el desarrollo en el área de deep learning. Definiremos el vector $\boldsymbol{\delta^l}= [\delta_1^l, \delta_2^l, \dots, \delta_j^l]^T$ como el vector de **errores** de la capa $l$, $\nabla_{a^L}C = [\frac{\partial C}{\partial a_1^L}, \frac{\partial C}{\partial a_2^L}, \dots, \frac{\partial C}{\partial a_j^L}]^T$ corresponde al gradiente de la función de coste con respecto a la activación de la última capa y además definiremos el operador $\odot$, como el producto elemento a elemento entre dos vectores, es decir, si tenemos dos vectores $\boldsymbol{a}$ y $\boldsymbol{b}$, entonces la multiplicación punto a punto se define como:$$ \boldsymbol{a}\odot \boldsymbol{b} = \begin{bmatrix}a_1 \\ a_2 \\ \vdots \\ a_j\end{bmatrix} \odot \begin{bmatrix}b_1 \\ b_2 \\ \vdots \\ b_j\end{bmatrix} = \begin{bmatrix}a_1b_1 \\ a_2b_2 \\ \vdots \\ a_jb_j\end{bmatrix} $$Ya teniendo estas definiciones disponibles, tenemos finalmente que las **ecuaciones de backpropagation** son las siguientes:$$ \boldsymbol{\delta^L} = \nabla_{a^L} C \odot \dot{\sigma}(\boldsymbol{z^L}) $$$$ \boldsymbol{\delta^l} = \left[ \left(\boldsymbol{W^{l+1}}\right)^T \boldsymbol{\delta^{l+1}} \right] \odot \dot{\sigma}(\boldsymbol{z^l}) $$$$ \frac{\partial C}{\partial b_j^l} = \delta_j^l $$$$ \frac{\partial C}{\partial w_{jk}^l} = \delta_j^l a_k^{l-1} $$ Antes de finalizar nuestra discusión sobre backpropagation, es importante concluir los siguientes puntos:- Teniendo el error de cada neurona, el calculo del gradiente es inmediato, ya que se compone de valores calculados previamente.- El error de cada neurona **depende directamente** de la derivada de la función de activación. 
Velocidad de aprendizajeNótese como al final de la sección anterior mencionamos que el error de cada neurona es **directamente proporcional** a la derivada de la función de activación, esto implica que el gradiente de la función de coste con respecto a los parámetros también lo será, aquello tiene implicaciones importantes al momento de escoger la función de activación, para ilustrarlo mostraremos a continuación la gráfica de la derivada de las funciones de activación más utilizadas: ###Code from scipy.misc import derivative def sigmoid(x): return 1 / (1 + np.exp(-x)) def relu(x): return np.maximum(0, x) def step(x): return np.heaviside(x, 0.5) def plot_function_and_derivative( ax, function, x, func_name, xlabel, ylabel, title): ax.plot(x,function(x), c='blue', label=func_name, linewidth=3) ax.plot(x,derivative(function,x), c='red', label='Derivada', linewidth=3) ax.set_title(title, fontsize=20) ax.set_xlabel("x", fontsize=20) ax.set_ylabel(ylabel, fontsize=20) ax.legend() ax.grid(True) x = np.linspace(-6, 6, 1000) fig, ax = plt.subplots(2, 2, figsize=(15,10)) plot_function_and_derivative( ax[0,0], sigmoid, x, "Sigmoid", "x", "$\sigma(x)$", "Sigmoid y derivada") plot_function_and_derivative( ax[0,1], relu, x, "ReLu", "x", "$ReLu(x)$", "ReLu y derivada") plot_function_and_derivative( ax[1,0], np.tanh, x, "Tanh", "x", "$Tanh(x)$", "Tanh y derivada") plot_function_and_derivative( ax[1,1], step, x, "Escalón", "x", "$h(x)$", "Escalón y derivada") ###Output _____no_output_____ ###Markdown Nótese que tanto la función **escalón**, como **ReLu** no son diferenciables en 0, por ende los valores observados alrededor de 0 es como **scipy** maneja dicha indefinición. Lo importante a destacar es lo que sucede cuando la función se acerca a 1, en el caso de la función **sigmoide** y **tanh** la derivada tiende a 0, esto conlleva a la siguiente aseveración:- Utilizando la función **sigmoide** o **tanh**, mientras más saturado se encuentre una neurona, menor será su velocidad de aprendizaje, debido a que la neurona será incapaz de cambiar significativamente el error de la red.Por otra si observamos la función **ReLu**, podemos concluir lo siguiente:- La velocidad de aprendizaje de una red neuronal utilizando **ReLu** como función de activación, es independiente del nivel de saturación de la neurona, esto se debe a que la derivada de dicha función es constante en todo el rango positivo de la neurona.Por último, tenemos que:- La función **escalón** **no sirve** para el aprendizaje, esto se debe a que su derivada es 0 en todo el rango de la función (exceptuando el 0). Introducción a KerasA sido un largo camino y con ello hemos visto bastante teoría, pero ya es momento de poner todo en práctica y lo haremos intentando solucionar el siguiente problema:Se requiere construir un modelo capaz de clasificar prendas de vestir, utilizando imágenes de vestimenta extraidas de una base de datos llamada Fashion MNIST, se espera obteneruna presición mayor al 90% utilizando una red neuronal secuencial.Para atacar el problema utilizaremos **Keras** el cual es un **API** de alto nivel creado por *François Chollet* y mantenido por *Google*, permite diseñar, implementar y entrenar modelos de redes neuronales de manera sencilla e intuitiva y utiliza como motor computacional **Tensorflow 2**, es una **API** inspirada en la famosa libreria **Scikit Learn**, para utilizarla simplemente importamos la librería como se muestra a continuación. 
###Code import tensorflow as tf import random from tensorflow import keras ###Output _____no_output_____ ###Markdown Exploración de datosComo utilizaremos la base de datos **Fashion MNIST**, lo primero que haremos es cargarla, esto es posible realizarlo mediante Keras. ###Code fashion_mnist = keras.datasets.fashion_mnist (X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data() print(X_train_full.shape) print(X_train_full.dtype) print(y_train_full.shape) print(y_train_full.dtype) ###Output (60000, 28, 28) uint8 (60000,) uint8 ###Markdown > Se llama **Fashion MNIST** debido a que es una adaptación de la base de datos **MNIST**, tiene exactamente la misma estructura en el sentido de que son imágenes de 28x28 pixeles en escala de grises, cada pixel teniendo un rango de 0 a 255. Podemos ver que la base de datos ya viene separada en un conjunto de entrenamiento y un conjunto de prueba, si queremos visualizar una instancia de nuestro dataset, podemos ejecutar el siguiente código: ###Code idx = random.randint(0, 59999) instance = X_train_full[idx, :, :] plt.imshow(instance, cmap='gray_r') ###Output _____no_output_____ ###Markdown Si observamos la estructura de las etiquetas, podemos observar que corresponden a valores numéricos sin signo, donde cada valor corresponde a un tipo de prenda, por lo que si queremos tener una representación legible, deberemos generar una lista con los nombres de cada categoría. ###Code class_names = ["Polera", "Pantalón", "Polerón", "Vestido", "Abrigo", "Sandalia", "Camisa", "Zapatillas", "Mochila/Cartera", "Taco"] class_names[y_train_full[idx]] ###Output _____no_output_____ ###Markdown Para poder crear y entrenar nuestra red neuronal, necesitamos adecuar nuestro dataset para que el entrenamiento sea óptimo, por lo que procederemos a normalizar los valores de cada imagen a un rango entre 0 y 1, además, crearemos un grupo de validación, que utilizaremos durante el proceso de entrenamiento. ###Code X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0 y_valid, y_train = y_train_full[:5000] / 255.0, y_train_full[5000:] / 255.0 ###Output _____no_output_____ ###Markdown Creación de red neuronalTeniendo ya nuestros datos listos para el entrenamiento, utilizaremos Keras para construir nuestro modelo de red neuronal, la API es intuitiva de utilizar y nos permite tener mucha flexibilidad al momento de crear nuestros modelos. ###Code model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape=[28,28])) model.add(keras.layers.Dense(300, activation="relu")) model.add(keras.layers.Dense(100, activation="relu")) model.add(keras.layers.Dense(10, activation="softmax")) ###Output _____no_output_____ ###Markdown Lo primero que hacemos es crear un modelo secuencial, lo cual nos permite ir agregando capas una tras otra, en donde la salida de una capa se alimenta a la entrada de la capa siguiente. La primera capa corresponde a una capa de preprocesamiento, en donde ajusta nuestro arreglo de 28x28 pixeles a un arreglo plano de 1x784, el cual será alimentado a la siguiente capa. Las capas posteriores corresponden a **capas densas**, la cual ya exploramos en las secciones anteriores, por lo que nuestra red neuronal corresponde a una **red secuencial totalmente conectada** (o red MLP). 
Cabe destacar que se utiliza la función de activación ReLu en todas las capas, exceptuando la última en donde se utiliza la función softmax, cosa de interpretar la salida de la red como la probabilidad de que la entrada corresponda a una categoría en particular, también hay que destacar que la última capa contiene sólamente 10 neuronas, ya que queremos clasificar 10 posibles tipos de prendas.Utilizando el método `summary`, podemos ver un resumen de nuestra red neuronal, donde se muestran la capaz creadas, el nombre de cada capa, la forma que tiene cada capa y la cantidad de parámetros entrenables y no entrenables. Cabe notar que el modelo tiene muchos parámetros (266.610 parámetros!) lo cual le da al modelo mucha flexibilidad de entrenamiento, pero al mismo tiempo corre el riesgo de hacer overfitting (junto a otros problemas que se verán más adelante). ###Code model.summary() ###Output Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten_1 (Flatten) (None, 784) 0 dense_3 (Dense) (None, 300) 235500 dense_4 (Dense) (None, 100) 30100 dense_5 (Dense) (None, 10) 1010 ================================================================= Total params: 266,610 Trainable params: 266,610 Non-trainable params: 0 _________________________________________________________________ ###Markdown También podemos acceder a todas las capas y a todos los parámetros utilizando nuestro modelo: ###Code print(model.layers) hidden1 = model.layers[1] weights, biases = hidden1.get_weights() print(weights.shape) print(weights) print(biases) ###Output (784, 300) [[-3.6438253e-02 4.9555875e-02 6.1383143e-02 ... -6.0707431e-02 4.2405926e-02 -3.2193959e-05] [ 1.7555967e-02 9.4652921e-03 2.4680980e-02 ... 7.3595569e-03 7.1548536e-02 3.7059963e-02] [-3.7807863e-02 -1.8820234e-02 -1.3896190e-02 ... 9.0565607e-03 4.6632037e-02 5.0151654e-02] ... [ 4.1010134e-02 -4.9826108e-02 -4.2498052e-02 ... 2.2034653e-02 4.9556337e-02 4.5869827e-02] [ 4.0222093e-02 -3.8377851e-02 7.8043193e-03 ... 4.0970743e-03 4.6357721e-02 2.9997371e-02] [-5.1667728e-02 -4.5209479e-02 -5.0317287e-02 ... -1.6803406e-02 7.0411950e-02 4.7724828e-02]] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] 
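###Markdown Obsérvese que todos los sesgos aparecen en cero: por defecto, `keras.layers.Dense` inicializa los sesgos con `"zeros"` y los pesos con `"glorot_uniform"`, lo cual se puede verificar directamente sobre la capa: ###Code
print(hidden1.kernel_initializer)
print(hidden1.bias_initializer)
###Output _____no_output_____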
###Markdown Entrenamiento de red neuronalUna vez creado nuestro modelo, antes de realizar el entrenamiento, debemos primero compilarlo, en este proceso es donde especificamos la función de costo, el optimizador a utilizar y podemos además especificar una lista de métricas a calcular durante el entrenamiento, nótese que dichas métricas son diferentes a la función de costo, el cual este último tiene la función de optimizar los parámetros de la red, mientras que las métricas nos dan indicios sobre el desempeño en general de la red. ###Code model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(learning_rate=0.5), metrics=["accuracy"]) ###Output _____no_output_____ ###Markdown Ahora estamos listos para entrenar el modelo, para esto, al igual que como se entrena un modelo en la libreria de **scikit-learn**, utilizamos el método `fit` de nuestro modelo. El método tiene como parámetros el dataset de entrenamiento, seguido de las etiquetas, además, podemos especificar el número de **epochs** que corresponde a la cantidad de veces el cual la red neuronal pasa por el dataset completo, finalmente, pasamos el dataset de validación que creamos anteriormente. ###Code model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid)) ###Output Epoch 1/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.1298e-05 - accuracy: 0.1008 - val_loss: 7.9204e-06 - val_accuracy: 0.0914 Epoch 2/30 1719/1719 [==============================] - 6s 3ms/step - loss: 6.1821e-06 - accuracy: 0.1008 - val_loss: 5.0425e-06 - val_accuracy: 0.0914 Epoch 3/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.2365e-06 - accuracy: 0.1008 - val_loss: 3.6619e-06 - val_accuracy: 0.0914 Epoch 4/30 1719/1719 [==============================] - 6s 3ms/step - loss: 3.2112e-06 - accuracy: 0.1008 - val_loss: 2.8616e-06 - val_accuracy: 0.0914 Epoch 5/30 1719/1719 [==============================] - 6s 3ms/step - loss: 2.5785e-06 - accuracy: 0.1008 - val_loss: 2.3379e-06 - val_accuracy: 0.0914 Epoch 6/30 1719/1719 [==============================] - 6s 3ms/step - loss: 2.1481e-06 - accuracy: 0.1008 - val_loss: 1.9693e-06 - val_accuracy: 0.0914 Epoch 7/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.8382e-06 - accuracy: 0.1008 - val_loss: 1.6981e-06 - val_accuracy: 0.0914 Epoch 8/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.6046e-06 - accuracy: 0.1008 - val_loss: 1.4895e-06 - val_accuracy: 0.0914 Epoch 9/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.4220e-06 - accuracy: 0.1008 - val_loss: 1.3242e-06 - val_accuracy: 0.0914 Epoch 10/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.2757e-06 - accuracy: 0.1008 - val_loss: 1.1909e-06 - val_accuracy: 0.0914 Epoch 11/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.1560e-06 - accuracy: 0.1008 - val_loss: 1.0814e-06 - val_accuracy: 0.0914 Epoch 12/30 1719/1719 [==============================] - 6s 3ms/step - loss: 1.0564e-06 - accuracy: 0.1008 - val_loss: 9.8896e-07 - val_accuracy: 0.0914 Epoch 13/30 1719/1719 [==============================] - 6s 3ms/step - loss: 9.7192e-07 - accuracy: 0.1008 - val_loss: 9.1049e-07 - val_accuracy: 0.0914 Epoch 14/30 1719/1719 [==============================] - 6s 3ms/step - loss: 8.9976e-07 - accuracy: 0.1008 - val_loss: 8.4248e-07 - val_accuracy: 0.0914 Epoch 15/30 1719/1719 [==============================] - 6s 3ms/step - loss: 8.3711e-07 - accuracy: 0.1008 - val_loss: 7.8405e-07 - 
val_accuracy: 0.0914 Epoch 16/30 1719/1719 [==============================] - 6s 3ms/step - loss: 7.8236e-07 - accuracy: 0.1008 - val_loss: 7.3262e-07 - val_accuracy: 0.0914 Epoch 17/30 1719/1719 [==============================] - 6s 3ms/step - loss: 7.3421e-07 - accuracy: 0.1008 - val_loss: 6.8729e-07 - val_accuracy: 0.0914 Epoch 18/30 1719/1719 [==============================] - 6s 3ms/step - loss: 6.9145e-07 - accuracy: 0.1008 - val_loss: 6.4715e-07 - val_accuracy: 0.0914 Epoch 19/30 1719/1719 [==============================] - 6s 3ms/step - loss: 6.5323e-07 - accuracy: 0.1008 - val_loss: 6.1112e-07 - val_accuracy: 0.0914 Epoch 20/30 1719/1719 [==============================] - 6s 3ms/step - loss: 6.1890e-07 - accuracy: 0.1008 - val_loss: 5.7854e-07 - val_accuracy: 0.0914 Epoch 21/30 1719/1719 [==============================] - 6s 3ms/step - loss: 5.8786e-07 - accuracy: 0.1008 - val_loss: 5.4915e-07 - val_accuracy: 0.0914 Epoch 22/30 1719/1719 [==============================] - 6s 3ms/step - loss: 5.5970e-07 - accuracy: 0.1008 - val_loss: 5.2243e-07 - val_accuracy: 0.0914 Epoch 23/30 1719/1719 [==============================] - 6s 3ms/step - loss: 5.3397e-07 - accuracy: 0.1008 - val_loss: 4.9814e-07 - val_accuracy: 0.0914 Epoch 24/30 1719/1719 [==============================] - 6s 3ms/step - loss: 5.1064e-07 - accuracy: 0.1008 - val_loss: 4.7612e-07 - val_accuracy: 0.0914 Epoch 25/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.8900e-07 - accuracy: 0.1008 - val_loss: 4.5550e-07 - val_accuracy: 0.0914 Epoch 26/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.6910e-07 - accuracy: 0.1008 - val_loss: 4.3677e-07 - val_accuracy: 0.0914 Epoch 27/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.5073e-07 - accuracy: 0.1008 - val_loss: 4.1925e-07 - val_accuracy: 0.0914 Epoch 28/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.3368e-07 - accuracy: 0.1008 - val_loss: 4.0340e-07 - val_accuracy: 0.0914 Epoch 29/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.1777e-07 - accuracy: 0.1008 - val_loss: 3.8810e-07 - val_accuracy: 0.0914 Epoch 30/30 1719/1719 [==============================] - 6s 3ms/step - loss: 4.0301e-07 - accuracy: 0.1008 - val_loss: 3.7415e-07 - val_accuracy: 0.0914
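###Markdown Nótese que la exactitud queda estancada alrededor de 0.10 (equivalente a adivinar al azar entre 10 clases) tanto en entrenamiento como en validación. La causa más probable es que, en la celda de preprocesamiento, las **etiquetas** también se dividieron por 255 (solo los pixeles deben normalizarse), por lo que `sparse_categorical_crossentropy` ya no recibe enteros de clase válidos; además, una tasa de aprendizaje de 0.5 es bastante agresiva para SGD. Un bosquejo de la corrección sería el siguiente (la tasa de aprendizaje de 0.01 es solo una sugerencia): ###Code
# Solo se normalizan los pixeles; las etiquetas se dejan como enteros 0-9
X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=0.01),
              metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=30,
                    validation_data=(X_valid, y_valid))
###Output _____no_output_____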
features/food/food.ipynb
###Markdown Food Disease dataset ###Code df_train = pd.read_csv("/Users/adamkovacs/data/food-disease-dataset/splits/cause_folds/fold0/train.csv", sep=",", quotechar='"') df_dev = pd.read_csv("/Users/adamkovacs/data/food-disease-dataset/splits/cause_folds/fold0/val.csv", sep=",", quotechar='"') import re def extract_entities(df): sen = re.sub(re.escape(df.term1), 'XXX', df.sentence, flags=re.IGNORECASE) sen = re.sub(re.escape(df.term2), 'YYY', sen, flags=re.IGNORECASE) return sen.encode('ascii', errors='ignore').decode('utf-8') df_train['preprocessed_sen'] = df_train.apply(extract_entities, axis=1) df_train['treat_label'] = df_train.is_treat.replace({1: 'TREAT', 0: 'NOT'}) df_train['cause_label'] = df_train.is_cause.replace({1: 'CAUSE', 0: 'NOT'}) df_dev['preprocessed_sen'] = df_dev.apply(extract_entities, axis=1) df_dev['treat_label'] = df_dev.is_treat.replace({1: 'TREAT', 0: 'NOT'}) df_dev['cause_label'] = df_dev.is_cause.replace({1: 'CAUSE', 0: 'NOT'}) from xpotato.dataset.dataset import Dataset from xpotato.models.trainer import GraphTrainer ###Output _____no_output_____ ###Markdown Detecting treat ###Code train_rows = df_train.iterrows() dev_rows = df_dev.iterrows() train_sentences = [(row[1].preprocessed_sen, row[1].treat_label) for row in train_rows] dev_sentences = [(row[1].preprocessed_sen, row[1].treat_label) for row in dev_rows] train_dataset = Dataset(train_sentences, label_vocab={"TREAT":1, "NOT": 0}, lang='en_bio') train_dataset.set_graphs(train_dataset.parse_graphs(graph_format="ud")) dev_dataset = Dataset(dev_sentences, label_vocab={"TREAT":1, "NOT": 0}, lang='en_bio') dev_dataset.set_graphs(dev_dataset.parse_graphs(graph_format="ud")) train_df = train_dataset.to_dataframe() dev_df = dev_dataset.to_dataframe() from xpotato.dataset.utils import save_dataframe save_dataframe(train_df, 'food_train_dataset_treat_ud.tsv') save_dataframe(dev_df, 'food_dev_dataset_treat_ud.tsv') ###Output _____no_output_____ ###Markdown Detecting cause ###Code train_rows = df_train.iterrows() dev_rows = df_dev.iterrows() train_sentences = [(row[1].preprocessed_sen, row[1].cause_label) for row in train_rows] dev_sentences = [(row[1].preprocessed_sen, row[1].cause_label) for row in dev_rows] train_dataset_cause = Dataset(train_sentences, label_vocab={"CAUSE":1, "NOT": 0}) train_dataset_cause.set_graphs(train_dataset.graphs) dev_dataset_cause = Dataset(dev_sentences, label_vocab={"CAUSE":1, "NOT": 0}) dev_dataset_cause.set_graphs(dev_dataset.graphs) train_df = train_dataset.to_dataframe() dev_df = dev_dataset.to_dataframe() save_dataframe(train_df, 'food_train_dataset_cause_fourlang.tsv') save_dataframe(dev_df, 'food_dev_dataset_cause_fourlang.tsv') ###Output _____no_output_____
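###Markdown Note that in the cause-detection section the exported dataframes are still built from `train_dataset` / `dev_dataset` (the treat-labelled objects), and the file names say "fourlang" even though the graphs were parsed with `graph_format="ud"`. Assuming the intent is to export the cause-labelled data with consistent names, a corrected sketch would be: ###Code
train_df_cause = train_dataset_cause.to_dataframe()
dev_df_cause = dev_dataset_cause.to_dataframe()

save_dataframe(train_df_cause, 'food_train_dataset_cause_ud.tsv')
save_dataframe(dev_df_cause, 'food_dev_dataset_cause_ud.tsv')
###Output _____no_output_____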
time_series/2-Acquire.ipynb
###Markdown 2. Acquire the Data Finding Data SourcesThere are three place to get onion price and quantity information by market. 1. **[Agmarket](http://agmarknet.nic.in/)** - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India and provides daily price and arrival data for all agricultural commodities at national and state level. Unfortunately, the link to get Market-wise Daily Report for Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is like to require an involved scraper to get the data. Too much effort - Move on. Here is the best link to go to get what is available - http://agmarknet.nic.in/agnew/NationalBEnglish/SpecificCommodityWeeklyReport.aspx?ss=12. **[Data.gov.in](https://data.gov.in/)** - This is normally a good place to get government data in a machine readable form like csv or xml. The Variety-wise Daily Market Prices Data of Onion is available for each year as an XML but unfortunately it does not include quantity information that is needed. It would be good to have both price and quantity - so even though this is easy, lets see if we can get both from a different source. Here is the best link to go to get what is available - https://data.gov.in/catalog/variety-wise-daily-market-prices-data-onionweb_catalog_tabs_block_103. **[NHRDF](http://nhrdf.org/en-us/)** - This is the website of National Horticultural Research & Development Foundation and maintains a database on Market Arrivals and Price, Area and Production and Export Data for three commodities - Garlic, Onion and Potatoes. We are in luck! It also has data from 1996 onwards and has only got one form to fill to get the data in a tabular form. Further it also has production and export data. Excellent. Lets use this. Here is the best link to got to get all that is available - http://nhrdf.org/en-us/DatabaseReports Scraping the Data Ways to Scrape DataNow we can do this in two different levels of sophistication1. **Automate the form filling process**: The form on this page looks simple. But viewing source in the browser shows there form to fill with hidden fields and we will need to access it as a browser to get the session fields and then submit the form. This is a little bit more complicated than simple scraping a table on a webpage2. **Manually fill the form**: What if we manually fill the form with the desired form fields and then save the page as a html file. Then we can read this file and just scrape the table from it. Lets go with the simple way for now. Scraping - Manual Form FillingSo let us fill the form to get a small subset of data and test our scraping process. We will start by getting the [Monthwise Market Arrivals](http://nhrdf.org/en-us/MonthWiseMarketArrivals). - Crop Name: Onion- Month: January- Market: All- Year: 2016The saved webpage is available at [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html) Understand the HTML StructureWe need to scrape data from this html page... So let us try to understand the structure of the page.1. You can view the source of the page - typically Right Click and View Source on any browser and that would give your the source HTML for any page.2. You can open the developer tools in your browser and investigate the structure as you mouse over the page 3. 
We can use a tools like [Selector Gadget](http://selectorgadget.com/) to understand the id's and classes' used in the web pageOur data is under the **&lt;table&gt;** tag Exercise 1 Find the number of tables in the HTML Structure of [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)? Find all the Tables ###Code # Import the library we need, which is Pandas import pandas as pd # Read all the tables from the html document AllTables = pd.read_html('MonthWiseMarketArrivalsJan2016.html') # Let us find out how many tables has it found? len(AllTables) ###Output _____no_output_____ ###Markdown Exercise 2Find the exact table of data we want in the list of AllTables? Get the exact tableTo read the exact table we need to pass in an identifier value which would identify the table. We can use the `attrs` parameter in read_html to do so. The parameter we will pass is the `id` variable ###Code # So can we read our exact table OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) # So how many tables have we got now len(OneTable) # Show the table of data identifed by pandas with just the first five rows OneTable[0].head() ###Output _____no_output_____ ###Markdown However, we have not got the header correctly in our dataframe. Let us see if we can fix this.To get help on any function just use `??` before the function to help. Run this function and see what additional parameter you need to define to get the header correctly ###Code ??pd.read_html ###Output _____no_output_____ ###Markdown Exercise 3Read the html file again and ensure that the correct header is identifed by pandas? ###Code OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', header = 0, attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) ###Output _____no_output_____ ###Markdown Show the top five rows of the dataframe you have read to ensure the headers are now correct. ###Code OneTable[0].head() ###Output _____no_output_____ ###Markdown Dataframe Viewing ###Code # Let us store the dataframe in a df variable. You will see that as a very common convention in data science pandas use df = OneTable[0] # Shape of the dateset - number of rows & number of columns in the dataframe df.shape # Get the names of all the columns df.columns # Can we see sample rows - the top 5 rows df.head() # Can we see sample rows - the bottom 5 rows df.tail() # Can we access a specific columns df["Market"] # Using the dot notation df.Market # Selecting specific column and rows df[0:5]["Market"] # Works both ways df["Market"][0:5] #Getting unique values of State pd.unique(df['Market']) ###Output _____no_output_____ ###Markdown Downloading the Entire Month Wise Arrival Data ###Code AllTable = pd.read_html('MonthWiseMarketArrivals.html', header = 0, attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) AllTable[0].head() ??pd.DataFrame.to_csv AllTable[0].columns # Change the column names to simpler ones AllTable[0].columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod'] AllTable[0].head() # Save the dataframe to a csv file AllTable[0].to_csv('MonthWiseMarketArrivals.csv', index = False) ###Output _____no_output_____ ###Markdown 2. Acquire the Data Finding Data SourcesThere are three place to get onion price and quantity information by market. 1. 
**[Agmarket](http://agmarknet.nic.in/)** - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India and provides daily price and arrival data for all agricultural commodities at national and state level. Unfortunately, the link to get Market-wise Daily Report for Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is like to require an involved scraper to get the data. Too much effort - Move on. Here is the best link to go to get what is available - http://agmarknet.nic.in/agnew/NationalBEnglish/SpecificCommodityWeeklyReport.aspx?ss=12. **[Data.gov.in](https://data.gov.in/)** - This is normally a good place to get government data in a machine readable form like csv or xml. The Variety-wise Daily Market Prices Data of Onion is available for each year as an XML but unfortunately it does not include quantity information that is needed. It would be good to have both price and quantity - so even though this is easy, lets see if we can get both from a different source. Here is the best link to go to get what is available - https://data.gov.in/catalog/variety-wise-daily-market-prices-data-onionweb_catalog_tabs_block_103. **[NHRDF](http://nhrdf.org/en-us/)** - This is the website of National Horticultural Research & Development Foundation and maintains a database on Market Arrivals and Price, Area and Production and Export Data for three commodities - Garlic, Onion and Potatoes. We are in luck! It also has data from 1996 onwards and has only got one form to fill to get the data in a tabular form. Further it also has production and export data. Excellent. Lets use this. Here is the best link to got to get all that is available - http://nhrdf.org/en-us/DatabaseReports Scraping the Data Ways to Scrape DataNow we can do this in two different levels of sophistication1. **Automate the form filling process**: The form on this page looks simple. But viewing source in the browser shows there form to fill with hidden fields and we will need to access it as a browser to get the session fields and then submit the form. This is a little bit more complicated than simple scraping a table on a webpage2. **Manually fill the form**: What if we manually fill the form with the desired form fields and then save the page as a html file. Then we can read this file and just scrape the table from it. Lets go with the simple way for now. Scraping - Manual Form FillingSo let us fill the form to get a small subset of data and test our scraping process. We will start by getting the [Monthwise Market Arrivals](http://nhrdf.org/en-us/MonthWiseMarketArrivals). - Crop Name: Onion- Month: January- Market: All- Year: 2016The saved webpage is available at [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html) Understand the HTML StructureWe need to scrape data from this html page... So let us try to understand the structure of the page.1. You can view the source of the page - typically Right Click and View Source on any browser and that would give your the source HTML for any page.2. You can open the developer tools in your browser and investigate the structure as you mouse over the page 3. We can use a tools like [Selector Gadget](http://selectorgadget.com/) to understand the id's and classes' used in the web pageOur data is under the **&lt;table&gt;** tag Exercise 1 Find the number of tables in the HTML Structure of [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)? 
Find all the Tables ###Code # Import the library we need, which is Pandas import pandas as pd # Read all the tables from the html document AllTables = pd.read_html('MonthWiseMarketArrivalsJan2016.html') # Let us find out how many tables has it found? len(AllTables) type(AllTables) ###Output _____no_output_____ ###Markdown Exercise 2Find the exact table of data we want in the list of AllTables? ###Code AllTables[4] ###Output _____no_output_____ ###Markdown Get the exact tableTo read the exact table we need to pass in an identifier value which would identify the table. We can use the `attrs` parameter in read_html to do so. The parameter we will pass is the `id` variable ###Code # So can we read our exact table OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) # So how many tables have we got now len(OneTable) # Show the table of data identifed by pandas with just the first five rows OneTable[0].head() ###Output _____no_output_____ ###Markdown However, we have not got the header correctly in our dataframe. Let us see if we can fix this.To get help on any function just use `??` before the function to help. Run this function and see what additional parameter you need to define to get the header correctly ###Code ??pd.read_html ###Output _____no_output_____ ###Markdown Exercise 3Read the html file again and ensure that the correct header is identifed by pandas? ###Code OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', header = 0, attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) ###Output _____no_output_____ ###Markdown Show the top five rows of the dataframe you have read to ensure the headers are now correct. ###Code OneTable[0].head() ###Output _____no_output_____ ###Markdown Dataframe Viewing ###Code # Let us store the dataframe in a df variable. You will see that as a very common convention in data science pandas use df = OneTable[0] # Shape of the dateset - number of rows & number of columns in the dataframe df.shape # Get the names of all the columns df.columns # Can we see sample rows - the top 5 rows df.head() # Can we see sample rows - the bottom 5 rows df.tail() # Can we access a specific columns df["Market"] # Using the dot notation df.Market # Selecting specific column and rows df[0:5]["Market"] # Works both ways df["Market"][0:5] #Getting unique values of State pd.unique(df['Market']) ###Output _____no_output_____ ###Markdown Downloading the Entire Month Wise Arrival Data ###Code AllTable = pd.read_html('MonthWiseMarketArrivals.html', header = 0, attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'}) AllTable[0].head() ??pd.DataFrame.to_csv AllTable[0].columns # Change the column names to simpler ones AllTable[0].columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod'] AllTable[0].head() # Save the dataframe to a csv file AllTable[0].to_csv('MonthWiseMarketArrivals.csv', index = False) ###Output _____no_output_____
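As a quick sanity check on the file we just wrote, we could read the CSV back and confirm that the simplified column names and the scraped rows survived the round trip. This is a minimal sketch, assuming `MonthWiseMarketArrivals.csv` was produced by the cell above:

```python
import pandas as pd

# Read the file we just saved back into a dataframe
check = pd.read_csv('MonthWiseMarketArrivals.csv')

# The simplified column names we assigned should come back unchanged
print(check.columns.tolist())
# ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod']

# Number of rows and columns we scraped
print(check.shape)
```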
tests/base/ipynb/exercise_list_all.ipynb
###Markdown Exercise List (all)**Exercise 1**This is a new exercise. It should be repeated again in the list below, in addition to exercises from other files**Exercise 1**This is a new exercise. It should be repeated again in the list below, in addition to exercises from other files([*back to text*](exercise-0))**Exercise 1 (exercise_list_labels)**This is an exercise from the `exercise_list_labels` file([*back to text*](exercise_list_labels.ipynbexercise-0))**Exercise 1 (exercises)**This is a note that has some _italic_ and **bold** embedded- list - in - exercise ```pythondef foobar(x, y, z): print(x, y, z)``` And text after the code blockbelow is something that should be a real code block ###Code def foobar(x, y, z): print(x, y, z) ###Output _____no_output_____ ###Markdown And text to follow([*back to text*](exercises.ipynbexercise-0))**Exercise 2 (exercises)**This is a normal exercise([*back to text*](exercises.ipynbexercise-1))**Question 3**I'm a function with a label and a different titleDefine a function named `var` that takes a list (call it `x`) andcomputes the variance. This function should use the mean function that wedefined earlier.Hint: $ \text{variance} = \frac{1}{N} \sum_i (x_i - \text{mean}(x))^2 $ ###Code # your code here ###Output _____no_output_____ ###Markdown Exercise List (all)**Exercise 1**This is a new exercise. It should be repeated again in the list below, in addition to exercises from other files**Exercise 1**This is a new exercise. It should be repeated again in the list below, in addition to exercises from other files([*back to text*](exercise-0))**Exercise 1 (exercise_list_labels)**This is an exercise from the `exercise_list_labels` file([*back to text*](exercise_list_labels.ipynbexercise-0))**Exercise 1 (exercises)**This is a note that has some _italic_ and **bold** embedded- list - in - exercise ```pythondef foobar(x, y, z): print(x, y, z)``` And text after the code blockbelow is something that should be a real code block ###Code def foobar(x, y, z): print(x, y, z) ###Output _____no_output_____ ###Markdown And text to follow([*back to text*](exercises.ipynbexercise-0))**Exercise 2 (exercises)**This is a normal exercise([*back to text*](exercises.ipynbexercise-1))**Question 3**I'm a function with a label and a different titleDefine a function named `var` that takes a list (call it `x`) andcomputes the variance. This function should use the mean function that wedefined earlier.Hint: $ \text{variance} = \frac{1}{N} \sum_i (x_i - \text{mean}(x))^2 $ ###Code # your code here ###Output _____no_output_____ ###Markdown Exercise List (all)**Exercise 1**This is a new exercise. It should be repeated again in the list below, in addition to exercises from other files**Exercise 1**This is a new exercise. 
It should be repeated again in the list below, in addition to exercises from other files([*back to text*](exercise-0))**Exercise 1 (exercise_list_labels)**This is an exercise from the `exercise_list_labels` file([*back to text*](exercise_list_labels.ipynbexercise-0))**Exercise 1 (exercises)**This is a note that has some _italic_ and **bold** embedded- list - in - exercise ```pythondef foobar(x, y, z): print(x, y, z)``` And text after the code blockbelow is something that should be a real code block ###Code def foobar(x, y, z): print(x, y, z) ###Output _____no_output_____ ###Markdown And text to follow([*back to text*](exercises.ipynbexercise-0))**Exercise 2 (exercises)**This is a normal exercise([*back to text*](exercises.ipynbexercise-1))**Question 3**I'm a function with a label and a different titleDefine a function named `var` that takes a list (call it `x`) andcomputes the variance. This function should use the mean function that wedefined earlier.Hint: $ \text{variance} = \frac{1}{N} \sum_i (x_i - \text{mean}(x))^2 $ ###Code # your code here ###Output _____no_output_____
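One possible answer to Question 3 above, as a minimal sketch: it assumes the `mean` function referenced in the exercise is the simple list-based one, so a version of it is included here for completeness.

```python
def mean(x):
    # Average of the values in the list
    return sum(x) / len(x)

def var(x):
    # variance = (1/N) * sum_i (x_i - mean(x))**2
    m = mean(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

print(var([1, 2, 3, 4]))  # 1.25
```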
deeplearning1/nbs/dogscats-ensemble.ipynb
###Markdown Setup ###Code path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. Found 0 images belonging to 0 classes. ###Markdown In this notebook we're going to create an ensemble of models and use their average as our predictions. For each ensemble, we're going to follow our usual fine-tuning steps:1) Create a model that retrains just the last layer2) Add this to a model containing all VGG layers except the last layer3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)4) Add data augmentation, fine-tuning the dense layers without pre-computation.So first, we need to create our VGG model and pre-compute the output of the conv layers: ###Code model = Vgg16().model conv_layers,fc_layers = split_at(model, Conv2D) conv_model = Sequential(conv_layers) val_features = conv_model.predict_generator(val_batches,verbose=1) val_features.shape save_array(model_path + 'valid_convlayer_features.bc', val_features) len(batches) # The training set yields 360 steps (360 batches) batches.n batches.next()[0].shape,batches.next()[1].shape batches_2 = get_data(path+'train') batches_2.shape # GPU memory is not enough, so process the training set in chunks batches_2_1 = batches_2[:10000] batches_2_2 = batches_2[10000:20000] batches_2_3 = batches_2[20000:] batches_2_1.shape,batches_2_2.shape,batches_2_3.shape trn_features_1 = conv_model.predict(batches_2_1,verbose=1) trn_features_2 = conv_model.predict(batches_2_2,verbose=1) trn_features_3 = conv_model.predict(batches_2_3,verbose=1) trn_features_1.shape,trn_features_2.shape,trn_features_3.shape # Save to disk - memory is still too small to merge them in one go save_array(model_path + 'train_convlayer_features_1.bc', trn_features_1) save_array(model_path + 'train_convlayer_features_2.bc', trn_features_2) save_array(model_path + 'train_convlayer_features_3.bc', trn_features_3) trn_features = np.concatenate((trn_features_1,trn_features_2,trn_features_3)) save_array(model_path + 'train_convlayer_features.bc', trn_features) trn_features = conv_model.predict_generator(batches,verbose=1) # Precompute the output of the conv layers save_array(model_path + 'train_convlayer_features.bc', trn_features) trn_features.shape ###Output _____no_output_____ ###Markdown In the future we can just load these precomputed features: ###Code trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') ###Output _____no_output_____ ###Markdown We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done: ###Code trn = get_data(path+'train') save_array(model_path+'train_data.bc', trn) val = get_data(path+'valid') save_array(model_path+'valid_data.bc', val) ###Output _____no_output_____ ###Markdown In the future we can just load these resized images: ###Code trn = load_array(model_path+'train_data.bc') val = load_array(model_path+'valid_data.bc') ###Output _____no_output_____ ###Markdown Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model: ###Code model.summary() model.pop() model.pop() model.summary() ll_val_feat = model.predict_generator(val_batches) ll_feat =
model.predict_generator(batches) save_array(model_path + 'train_ll_feat.bc', ll_feat) save_array(model_path + 'valid_ll_feat.bc', ll_val_feat) ll_feat = load_array(model_path+ 'train_ll_feat.bc') ll_val_feat = load_array(model_path + 'valid_ll_feat.bc') ###Output _____no_output_____ ###Markdown ...and let's also grab the test data, for when we need to submit: ###Code test = get_data(path+'test') save_array(model_path+'test_data.bc', test) test = load_array(model_path+'test_data.bc') ###Output _____no_output_____ ###Markdown Last layer The functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model. ###Code def get_ll_layers(): return [ BatchNormalization(input_shape=(4096,)), Dropout(0.5), Dense(2, activation='softmax') ] def train_last_layer(i): ll_layers = get_ll_layers() ll_model = Sequential(ll_layers) ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_model.optimizer.lr=1e-5 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=12) ll_model.optimizer.lr=1e-7 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=1) ll_model.save_weights(model_path+'ll_bn' + i + '.h5') vgg = Vgg16() model = vgg.model model.pop(); model.pop(); model.pop() for layer in model.layers: layer.trainable=False model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_layers = get_ll_layers() for layer in ll_layers: model.add(layer) for l1,l2 in zip(ll_model.layers, model.layers[-3:]): l2.set_weights(l1.get_weights()) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.save_weights(model_path+'bn' + i + '.h5') return model ###Output _____no_output_____ ###Markdown Dense model ###Code def get_conv_model(model): layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) fc_layers = layers[last_conv_idx+1:] return conv_model, fc_layers, last_conv_idx def get_fc_layers(p, in_shape): return [ MaxPooling2D(input_shape=in_shape), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] def train_dense_layers(i, model): conv_model, fc_layers, last_conv_idx = get_conv_model(model) conv_shape = conv_model.output_shape[1:] fc_model = Sequential(get_fc_layers(0.5, conv_shape)) for l1,l2 in zip(fc_model.layers, fc_layers): weights = l2.get_weights() l1.set_weights(weights) fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) fc_model.fit(trn_features, trn_labels, epochs=2, batch_size=batch_size, validation_data=(val_features, val_labels)) gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, width_zoom_range=0.05, zoom_range=0.05, channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True) batches = gen.flow(trn, trn_labels, batch_size=batch_size) val_batches = image.ImageDataGenerator().flow(val, val_labels, shuffle=False, batch_size=batch_size) for layer in conv_model.layers: layer.trainable = False for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer) for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): l1.set_weights(l2.get_weights()) conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', 
metrics=['accuracy']) conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5') conv_model.fit_generator(batches, epochs=1, validation_data=val_batches) for layer in conv_model.layers[16:]: layer.trainable = True conv_model.fit_generator(batches, epochs=8, validation_data=val_batches) conv_model.optimizer.lr = 1e-7 conv_model.fit_generator(batches, epochs=10, validation_data=val_batches) conv_model.save_weights(model_path + 'aug' + i + '.h5') ###Output _____no_output_____ ###Markdown Build ensemble ###Code for i in range(5): i = str(i) model = train_last_layer(i) train_dense_layers(i, model) ###Output Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 3s 132us/step - loss: 0.7336 - acc: 0.7085 - val_loss: 0.4151 - val_acc: 0.8225 Epoch 2/12 23000/23000 [==============================] - 3s 113us/step - loss: 0.5241 - acc: 0.7948 - val_loss: 0.3434 - val_acc: 0.8595 Epoch 3/12 23000/23000 [==============================] - 3s 112us/step - loss: 0.4644 - acc: 0.8248 - val_loss: 0.3068 - val_acc: 0.8760 Epoch 4/12 23000/23000 [==============================] - 3s 114us/step - loss: 0.4290 - acc: 0.8403 - val_loss: 0.2847 - val_acc: 0.8885 Epoch 5/12 23000/23000 [==============================] - 3s 113us/step - loss: 0.3931 - acc: 0.8518 - val_loss: 0.2677 - val_acc: 0.8930 Epoch 6/12 23000/23000 [==============================] - 3s 114us/step - loss: 0.3749 - acc: 0.8591 - val_loss: 0.2555 - val_acc: 0.8975 Epoch 7/12 23000/23000 [==============================] - 3s 115us/step - loss: 0.3713 - acc: 0.8648 - val_loss: 0.2453 - val_acc: 0.9020 Epoch 8/12 23000/23000 [==============================] - 3s 116us/step - loss: 0.3546 - acc: 0.8691 - val_loss: 0.2381 - val_acc: 0.9045 Epoch 9/12 23000/23000 [==============================] - 3s 115us/step - loss: 0.3417 - acc: 0.8734 - val_loss: 0.2321 - val_acc: 0.9070 Epoch 10/12 23000/23000 [==============================] - 3s 115us/step - loss: 0.3389 - acc: 0.8763 - val_loss: 0.2259 - val_acc: 0.9115 Epoch 11/12 23000/23000 [==============================] - 3s 110us/step - loss: 0.3200 - acc: 0.8807 - val_loss: 0.2207 - val_acc: 0.9120 Epoch 12/12 23000/23000 [==============================] - 3s 115us/step - loss: 0.3246 - acc: 0.8789 - val_loss: 0.2165 - val_acc: 0.9130 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 3s 113us/step - loss: 0.3130 - acc: 0.8828 - val_loss: 0.2124 - val_acc: 0.9145 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval() ###Output _____no_output_____ ###Markdown Setup ###Code path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. 
Found 0 images belonging to 0 classes. ###Markdown In this notebook we're going to create an ensemble of models and use their average as our predictions. For each ensemble, we're going to follow our usual fine-tuning steps:1) Create a model that retrains just the last layer2) Add this to a model containing all VGG layers except the last layer3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)4) Add data augmentation, fine-tuning the dense layers without pre-computation.So first, we need to create our VGG model and pre-compute the output of the conv layers: ###Code model = Vgg16().model conv_layers,fc_layers = split_at(model, Convolution2D) conv_model = Sequential(conv_layers) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) ###Output _____no_output_____ ###Markdown In the future we can just load these precomputed features: ###Code trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') ###Output _____no_output_____ ###Markdown We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done: ###Code trn = get_data(path+'train') val = get_data(path+'valid') save_array(model_path+'train_data.bc', trn) save_array(model_path+'valid_data.bc', val) ###Output _____no_output_____ ###Markdown In the future we can just load these resized images: ###Code trn = load_array(model_path+'train_data.bc') val = load_array(model_path+'valid_data.bc') ###Output _____no_output_____ ###Markdown Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model: ###Code model.pop() model.pop() ll_val_feat = model.predict_generator(val_batches, val_batches.nb_sample) ll_feat = model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_ll_feat.bc', ll_feat) save_array(model_path + 'valid_ll_feat.bc', ll_val_feat) ll_feat = load_array(model_path+ 'train_ll_feat.bc') ll_val_feat = load_array(model_path + 'valid_ll_feat.bc') ###Output _____no_output_____ ###Markdown ...and let's also grab the test data, for when we need to submit: ###Code test = get_data(path+'test') save_array(model_path+'test_data.bc', test) test = load_array(model_path+'test_data.bc') ###Output _____no_output_____ ###Markdown Last layer The functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model. 
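Before looking at the full functions below, the key trick they rely on is copying trained weights from a small standalone model into freshly added layers of the main model via `get_weights`/`set_weights`. A tiny, self-contained sketch of that pattern (the layer sizes here are made up and independent of VGG, just to show the mechanics):

```python
from keras.models import Sequential
from keras.layers import Dense

# A small model trained on its own (left untrained here, just to show the mechanics)
small = Sequential([Dense(4, input_shape=(8,), activation='relu'),
                    Dense(2, activation='softmax')])

# A larger model whose last two layers should receive the small model's weights
big = Sequential([Dense(8, input_shape=(16,), activation='relu'),
                  Dense(4, activation='relu'),
                  Dense(2, activation='softmax')])

# Copy weights layer by layer: pair the small model's layers with the big model's tail
for src, dst in zip(small.layers, big.layers[-2:]):
    dst.set_weights(src.get_weights())
```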
###Code def get_ll_layers(): return [ BatchNormalization(input_shape=(4096,)), Dropout(0.5), Dense(2, activation='softmax') ] def train_last_layer(i): ll_layers = get_ll_layers() ll_model = Sequential(ll_layers) ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_model.optimizer.lr=1e-5 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=12) ll_model.optimizer.lr=1e-7 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=1) ll_model.save_weights(model_path+'ll_bn' + i + '.h5') vgg = Vgg16() model = vgg.model model.pop(); model.pop(); model.pop() for layer in model.layers: layer.trainable=False model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_layers = get_ll_layers() for layer in ll_layers: model.add(layer) for l1,l2 in zip(ll_model.layers, model.layers[-3:]): l2.set_weights(l1.get_weights()) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.save_weights(model_path+'bn' + i + '.h5') return model ###Output _____no_output_____ ###Markdown Dense model ###Code def get_conv_model(model): layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) fc_layers = layers[last_conv_idx+1:] return conv_model, fc_layers, last_conv_idx def get_fc_layers(p, in_shape): return [ MaxPooling2D(input_shape=in_shape), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] def train_dense_layers(i, model): conv_model, fc_layers, last_conv_idx = get_conv_model(model) conv_shape = conv_model.output_shape[1:] fc_model = Sequential(get_fc_layers(0.5, conv_shape)) for l1,l2 in zip(fc_model.layers, fc_layers): weights = l2.get_weights() l1.set_weights(weights) fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) fc_model.fit(trn_features, trn_labels, nb_epoch=2, batch_size=batch_size, validation_data=(val_features, val_labels)) gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, width_zoom_range=0.05, zoom_range=0.05, channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True) batches = gen.flow(trn, trn_labels, batch_size=batch_size) val_batches = image.ImageDataGenerator().flow(val, val_labels, shuffle=False, batch_size=batch_size) for layer in conv_model.layers: layer.trainable = False for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer) for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): l1.set_weights(l2.get_weights()) conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5') conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.N) for layer in conv_model.layers[16:]: layer.trainable = True conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.optimizer.lr = 1e-7 conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.save_weights(model_path + 'aug' + i + '.h5') ###Output _____no_output_____ 
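Before launching the full ensemble loop below, it can be worth pulling a single batch from an augmentation generator like the one `train_dense_layers` builds, just to confirm the shapes being fed to the model. A minimal sketch, assuming `trn` and `trn_labels` are the arrays prepared earlier and that `image` is Keras' preprocessing module as imported by the course utilities (only a subset of the augmentation arguments is used, and the exact channel ordering depends on your Keras configuration):

```python
# Roughly the same style of augmentation as inside train_dense_layers
aug_gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05,
                                   height_shift_range=0.05, shear_range=0.05,
                                   zoom_range=0.05, horizontal_flip=True)

# Pull one augmented batch and inspect the image and label shapes
aug_batch, aug_labels = next(aug_gen.flow(trn, trn_labels, batch_size=8, shuffle=False))
print(aug_batch.shape, aug_labels.shape)   # e.g. (8, 3, 224, 224) (8, 2)
```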
###Markdown Build ensemble ###Code for i in range(5): i = str(i) model = train_last_layer(i) train_dense_layers(i, model) ###Output Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5184 - acc: 0.7895 - val_loss: 0.1549 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1984 - acc: 0.9237 - val_loss: 0.0941 - val_acc: 0.9670 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1524 - acc: 0.9426 - val_loss: 0.0762 - val_acc: 0.9735 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1247 - acc: 0.9542 - val_loss: 0.0662 - val_acc: 0.9740 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1128 - acc: 0.9567 - val_loss: 0.0609 - val_acc: 0.9760 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1043 - acc: 0.9635 - val_loss: 0.0560 - val_acc: 0.9775 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1010 - acc: 0.9640 - val_loss: 0.0548 - val_acc: 0.9790 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9650 - val_loss: 0.0526 - val_acc: 0.9780 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0926 - acc: 0.9656 - val_loss: 0.0513 - val_acc: 0.9785 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0881 - acc: 0.9680 - val_loss: 0.0500 - val_acc: 0.9795 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0933 - acc: 0.9666 - val_loss: 0.0497 - val_acc: 0.9800 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0842 - acc: 0.9693 - val_loss: 0.0484 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0824 - acc: 0.9696 - val_loss: 0.0486 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0798 - acc: 0.9719 - val_loss: 0.0500 - val_acc: 0.9830 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0415 - acc: 0.9853 - val_loss: 0.0551 - val_acc: 0.9840 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0559 - acc: 0.9814 - val_loss: 0.0578 - val_acc: 0.9825 Epoch 1/8 23000/23000 [==============================] - 271s - loss: 0.0515 - acc: 0.9834 - val_loss: 0.0645 - val_acc: 0.9860 Epoch 2/8 23000/23000 [==============================] - 271s - loss: 0.0385 - acc: 0.9875 - val_loss: 0.0670 - val_acc: 0.9850 Epoch 3/8 23000/23000 [==============================] - 271s - loss: 0.0313 - acc: 0.9890 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 4/8 23000/23000 [==============================] - 271s - loss: 0.0287 - acc: 0.9903 - val_loss: 0.0733 - val_acc: 0.9840 Epoch 5/8 23000/23000 [==============================] - 271s - loss: 0.0244 - acc: 0.9924 - val_loss: 0.0773 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 271s - loss: 0.0205 - acc: 0.9927 - val_loss: 0.0900 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 271s - loss: 0.0209 - acc: 0.9929 - val_loss: 0.0860 - val_acc: 0.9865 Epoch 8/8 23000/23000 [==============================] - 420s - loss: 0.0186 - acc: 0.9930 - val_loss: 0.0923 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 315s - loss: 0.0196 - acc: 0.9930 - val_loss: 0.0909 - val_acc: 0.9845 Epoch 2/10 23000/23000 [==============================] - 362s - loss: 0.0165 - acc: 
0.9945 - val_loss: 0.1023 - val_acc: 0.9830 Epoch 3/10 23000/23000 [==============================] - 447s - loss: 0.0179 - acc: 0.9940 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 601s - loss: 0.0112 - acc: 0.9960 - val_loss: 0.1030 - val_acc: 0.9830 Epoch 5/10 23000/23000 [==============================] - 528s - loss: 0.0130 - acc: 0.9956 - val_loss: 0.0946 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 657s - loss: 0.0110 - acc: 0.9961 - val_loss: 0.0904 - val_acc: 0.9850 Epoch 7/10 23000/23000 [==============================] - 621s - loss: 0.0116 - acc: 0.9963 - val_loss: 0.0872 - val_acc: 0.9865 Epoch 8/10 23000/23000 [==============================] - 603s - loss: 0.0118 - acc: 0.9960 - val_loss: 0.0813 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 616s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.1053 - val_acc: 0.9835 Epoch 10/10 23000/23000 [==============================] - 661s - loss: 0.0098 - acc: 0.9968 - val_loss: 0.0970 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5106 - acc: 0.7935 - val_loss: 0.1504 - val_acc: 0.9455 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2005 - acc: 0.9241 - val_loss: 0.0890 - val_acc: 0.9680 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1465 - acc: 0.9444 - val_loss: 0.0714 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1280 - acc: 0.9540 - val_loss: 0.0614 - val_acc: 0.9765 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1131 - acc: 0.9586 - val_loss: 0.0561 - val_acc: 0.9795 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1079 - acc: 0.9620 - val_loss: 0.0515 - val_acc: 0.9795 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0998 - acc: 0.9631 - val_loss: 0.0484 - val_acc: 0.9825 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0947 - acc: 0.9673 - val_loss: 0.0457 - val_acc: 0.9845 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9676 - val_loss: 0.0449 - val_acc: 0.9855 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0921 - acc: 0.9670 - val_loss: 0.0451 - val_acc: 0.9845 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0893 - acc: 0.9681 - val_loss: 0.0441 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0836 - acc: 0.9691 - val_loss: 0.0428 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0833 - acc: 0.9718 - val_loss: 0.0434 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0814 - acc: 0.9736 - val_loss: 0.0463 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0389 - acc: 0.9859 - val_loss: 0.0493 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0613 - acc: 0.9807 - val_loss: 0.0563 - val_acc: 0.9855 Epoch 1/8 23000/23000 [==============================] - 325s - loss: 0.0450 - acc: 0.9860 - val_loss: 0.0685 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 766s - loss: 0.0364 - acc: 0.9877 - val_loss: 0.0616 - val_acc: 0.9845 Epoch 3/8 23000/23000 
[==============================] - 600s - loss: 0.0338 - acc: 0.9891 - val_loss: 0.0585 - val_acc: 0.9845 Epoch 4/8 23000/23000 [==============================] - 634s - loss: 0.0288 - acc: 0.9903 - val_loss: 0.0740 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0265 - acc: 0.9904 - val_loss: 0.0789 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 780s - loss: 0.0254 - acc: 0.9909 - val_loss: 0.0853 - val_acc: 0.9855 Epoch 7/8 23000/23000 [==============================] - 680s - loss: 0.0180 - acc: 0.9937 - val_loss: 0.0747 - val_acc: 0.9870 Epoch 8/8 23000/23000 [==============================] - 776s - loss: 0.0191 - acc: 0.9939 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 712s - loss: 0.0191 - acc: 0.9929 - val_loss: 0.0943 - val_acc: 0.9855 Epoch 2/10 23000/23000 [==============================] - 679s - loss: 0.0175 - acc: 0.9946 - val_loss: 0.0723 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 640s - loss: 0.0148 - acc: 0.9949 - val_loss: 0.0756 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 761s - loss: 0.0147 - acc: 0.9953 - val_loss: 0.0772 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 733s - loss: 0.0163 - acc: 0.9946 - val_loss: 0.0931 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 574s - loss: 0.0107 - acc: 0.9967 - val_loss: 0.0874 - val_acc: 0.9845 Epoch 7/10 23000/23000 [==============================] - 611s - loss: 0.0123 - acc: 0.9958 - val_loss: 0.0918 - val_acc: 0.9855 Epoch 8/10 23000/23000 [==============================] - 668s - loss: 0.0098 - acc: 0.9965 - val_loss: 0.0896 - val_acc: 0.9855 Epoch 9/10 23000/23000 [==============================] - 624s - loss: 0.0096 - acc: 0.9964 - val_loss: 0.1012 - val_acc: 0.9850 Epoch 10/10 23000/23000 [==============================] - 747s - loss: 0.0113 - acc: 0.9960 - val_loss: 0.0961 - val_acc: 0.9835 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.5167 - acc: 0.7867 - val_loss: 0.1299 - val_acc: 0.9555 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1922 - acc: 0.9265 - val_loss: 0.0803 - val_acc: 0.9695 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1461 - acc: 0.9454 - val_loss: 0.0646 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1255 - acc: 0.9536 - val_loss: 0.0543 - val_acc: 0.9790 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1113 - acc: 0.9608 - val_loss: 0.0505 - val_acc: 0.9820 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1058 - acc: 0.9607 - val_loss: 0.0464 - val_acc: 0.9825 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0957 - acc: 0.9654 - val_loss: 0.0448 - val_acc: 0.9840 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0964 - acc: 0.9657 - val_loss: 0.0427 - val_acc: 0.9850 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0996 - acc: 0.9662 - val_loss: 0.0420 - val_acc: 0.9860 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0931 - acc: 0.9670 - val_loss: 0.0408 - val_acc: 0.9855 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0899 - acc: 0.9680 - val_loss: 0.0395 - val_acc: 0.9860 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 
0.0837 - acc: 0.9717 - val_loss: 0.0390 - val_acc: 0.9860 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9703 - val_loss: 0.0391 - val_acc: 0.9865 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0796 - acc: 0.9735 - val_loss: 0.0382 - val_acc: 0.9855 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0353 - acc: 0.9874 - val_loss: 0.0364 - val_acc: 0.9880 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0622 - acc: 0.9802 - val_loss: 0.0490 - val_acc: 0.9870 Epoch 1/8 23000/23000 [==============================] - 773s - loss: 0.0426 - acc: 0.9856 - val_loss: 0.0442 - val_acc: 0.9885 Epoch 2/8 23000/23000 [==============================] - 774s - loss: 0.0394 - acc: 0.9864 - val_loss: 0.0501 - val_acc: 0.9885 Epoch 3/8 23000/23000 [==============================] - 687s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0500 - val_acc: 0.9875 Epoch 4/8 23000/23000 [==============================] - 655s - loss: 0.0292 - acc: 0.9900 - val_loss: 0.0535 - val_acc: 0.9870 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0268 - acc: 0.9914 - val_loss: 0.0605 - val_acc: 0.9855 Epoch 6/8 23000/23000 [==============================] - 789s - loss: 0.0208 - acc: 0.9926 - val_loss: 0.0591 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 798s - loss: 0.0191 - acc: 0.9931 - val_loss: 0.0638 - val_acc: 0.9860 Epoch 8/8 23000/23000 [==============================] - 708s - loss: 0.0192 - acc: 0.9932 - val_loss: 0.0597 - val_acc: 0.9850 Epoch 1/10 23000/23000 [==============================] - 606s - loss: 0.0178 - acc: 0.9942 - val_loss: 0.0620 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 756s - loss: 0.0158 - acc: 0.9941 - val_loss: 0.0694 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 418s - loss: 0.0176 - acc: 0.9939 - val_loss: 0.0641 - val_acc: 0.9855 Epoch 4/10 23000/23000 [==============================] - 271s - loss: 0.0118 - acc: 0.9958 - val_loss: 0.0623 - val_acc: 0.9840 Epoch 5/10 23000/23000 [==============================] - 271s - loss: 0.0150 - acc: 0.9947 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 6/10 23000/23000 [==============================] - 271s - loss: 0.0119 - acc: 0.9961 - val_loss: 0.0595 - val_acc: 0.9880 Epoch 7/10 23000/23000 [==============================] - 304s - loss: 0.0121 - acc: 0.9957 - val_loss: 0.0668 - val_acc: 0.9885 Epoch 8/10 23000/23000 [==============================] - 273s - loss: 0.0124 - acc: 0.9960 - val_loss: 0.0619 - val_acc: 0.9885 Epoch 9/10 23000/23000 [==============================] - 271s - loss: 0.0099 - acc: 0.9963 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 273s - loss: 0.0091 - acc: 0.9970 - val_loss: 0.0628 - val_acc: 0.9890 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.4585 - acc: 0.8130 - val_loss: 0.1306 - val_acc: 0.9515 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1920 - acc: 0.9270 - val_loss: 0.0863 - val_acc: 0.9655 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1504 - acc: 0.9450 - val_loss: 0.0705 - val_acc: 0.9740 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1275 - acc: 0.9529 - val_loss: 0.0592 - val_acc: 0.9795 Epoch 5/12 
23000/23000 [==============================] - 0s - loss: 0.1190 - acc: 0.9555 - val_loss: 0.0555 - val_acc: 0.9815 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1068 - acc: 0.9609 - val_loss: 0.0536 - val_acc: 0.9805 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1003 - acc: 0.9624 - val_loss: 0.0496 - val_acc: 0.9830 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0979 - acc: 0.9660 - val_loss: 0.0482 - val_acc: 0.9825 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9678 - val_loss: 0.0475 - val_acc: 0.9830 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0917 - acc: 0.9666 - val_loss: 0.0458 - val_acc: 0.9825 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9665 - val_loss: 0.0454 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0919 - acc: 0.9675 - val_loss: 0.0443 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0883 - acc: 0.9685 - val_loss: 0.0440 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0825 - acc: 0.9720 - val_loss: 0.0437 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0359 - acc: 0.9874 - val_loss: 0.0474 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 272s - loss: 0.0581 - acc: 0.9817 - val_loss: 0.0562 - val_acc: 0.9850 Epoch 1/8 23000/23000 [==============================] - 520s - loss: 0.0486 - acc: 0.9833 - val_loss: 0.0590 - val_acc: 0.9830 Epoch 2/8 23000/23000 [==============================] - 745s - loss: 0.0379 - acc: 0.9867 - val_loss: 0.0595 - val_acc: 0.9840 Epoch 3/8 23000/23000 [==============================] - 736s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0628 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 708s - loss: 0.0260 - acc: 0.9903 - val_loss: 0.0722 - val_acc: 0.9855 Epoch 5/8 23000/23000 [==============================] - 700s - loss: 0.0250 - acc: 0.9921 - val_loss: 0.0734 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 802s - loss: 0.0212 - acc: 0.9923 - val_loss: 0.0721 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 765s - loss: 0.0211 - acc: 0.9928 - val_loss: 0.0772 - val_acc: 0.9835 Epoch 8/8 23000/23000 [==============================] - 743s - loss: 0.0185 - acc: 0.9933 - val_loss: 0.0756 - val_acc: 0.9835 Epoch 1/10 23000/23000 [==============================] - 782s - loss: 0.0168 - acc: 0.9941 - val_loss: 0.0815 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 580s - loss: 0.0155 - acc: 0.9942 - val_loss: 0.0771 - val_acc: 0.9840 Epoch 3/10 23000/23000 [==============================] - 654s - loss: 0.0142 - acc: 0.9954 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 4/10 23000/23000 [==============================] - 692s - loss: 0.0141 - acc: 0.9955 - val_loss: 0.0716 - val_acc: 0.9870 Epoch 5/10 23000/23000 [==============================] - 607s - loss: 0.0120 - acc: 0.9959 - val_loss: 0.0757 - val_acc: 0.9850 Epoch 6/10 23000/23000 [==============================] - 789s - loss: 0.0129 - acc: 0.9956 - val_loss: 0.0741 - val_acc: 0.9860 Epoch 7/10 23000/23000 [==============================] - 767s - loss: 0.0111 - acc: 0.9960 - val_loss: 0.0747 - val_acc: 0.9865 Epoch 8/10 
23000/23000 [==============================] - 557s - loss: 0.0103 - acc: 0.9967 - val_loss: 0.0774 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 521s - loss: 0.0106 - acc: 0.9962 - val_loss: 0.0855 - val_acc: 0.9855 Epoch 10/10 23000/23000 [==============================] - 484s - loss: 0.0095 - acc: 0.9970 - val_loss: 0.0780 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5435 - acc: 0.7783 - val_loss: 0.1669 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2054 - acc: 0.9227 - val_loss: 0.0999 - val_acc: 0.9675 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1549 - acc: 0.9405 - val_loss: 0.0763 - val_acc: 0.9725 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1327 - acc: 0.9520 - val_loss: 0.0642 - val_acc: 0.9755 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1147 - acc: 0.9573 - val_loss: 0.0590 - val_acc: 0.9790 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1078 - acc: 0.9605 - val_loss: 0.0545 - val_acc: 0.9815 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1001 - acc: 0.9631 - val_loss: 0.0526 - val_acc: 0.9820 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0977 - acc: 0.9654 - val_loss: 0.0515 - val_acc: 0.9815 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0937 - acc: 0.9660 - val_loss: 0.0497 - val_acc: 0.9825 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0942 - acc: 0.9683 - val_loss: 0.0489 - val_acc: 0.9835 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0904 - acc: 0.9687 - val_loss: 0.0473 - val_acc: 0.9830 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0855 - acc: 0.9689 - val_loss: 0.0469 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9685 - val_loss: 0.0470 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0846 - acc: 0.9719 - val_loss: 0.0510 - val_acc: 0.9845 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0392 - acc: 0.9866 - val_loss: 0.0548 - val_acc: 0.9860 Epoch 1/1 23000/23000 [==============================] - 273s - loss: 0.0585 - acc: 0.9800 - val_loss: 0.0608 - val_acc: 0.9875 Epoch 1/8 23000/23000 [==============================] - 677s - loss: 0.0456 - acc: 0.9845 - val_loss: 0.0690 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 654s - loss: 0.0398 - acc: 0.9859 - val_loss: 0.0763 - val_acc: 0.9835 Epoch 3/8 23000/23000 [==============================] - 711s - loss: 0.0304 - acc: 0.9894 - val_loss: 0.0662 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 646s - loss: 0.0252 - acc: 0.9913 - val_loss: 0.0747 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 726s - loss: 0.0246 - acc: 0.9909 - val_loss: 0.0809 - val_acc: 0.9850 Epoch 6/8 23000/23000 [==============================] - 582s - loss: 0.0182 - acc: 0.9933 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 627s - loss: 0.0201 - acc: 0.9928 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 8/8 23000/23000 [==============================] - 674s - loss: 0.0172 - acc: 0.9944 - 
val_loss: 0.0717 - val_acc: 0.9855 Epoch 1/10 23000/23000 [==============================] - 736s - loss: 0.0171 - acc: 0.9939 - val_loss: 0.0820 - val_acc: 0.9850 Epoch 2/10 23000/23000 [==============================] - 634s - loss: 0.0184 - acc: 0.9941 - val_loss: 0.0829 - val_acc: 0.9860 Epoch 3/10 23000/23000 [==============================] - 599s - loss: 0.0156 - acc: 0.9946 - val_loss: 0.0863 - val_acc: 0.9865 Epoch 4/10 23000/23000 [==============================] - 717s - loss: 0.0142 - acc: 0.9952 - val_loss: 0.0903 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 809s - loss: 0.0116 - acc: 0.9960 - val_loss: 0.0883 - val_acc: 0.9860 Epoch 6/10 23000/23000 [==============================] - 754s - loss: 0.0127 - acc: 0.9953 - val_loss: 0.0887 - val_acc: 0.9855 Epoch 7/10 23000/23000 [==============================] - 499s - loss: 0.0100 - acc: 0.9964 - val_loss: 0.0835 - val_acc: 0.9850 Epoch 8/10 23000/23000 [==============================] - 317s - loss: 0.0090 - acc: 0.9971 - val_loss: 0.0804 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 301s - loss: 0.0111 - acc: 0.9963 - val_loss: 0.0869 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 442s - loss: 0.0079 - acc: 0.9971 - val_loss: 0.0805 - val_acc: 0.9870 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval() ###Output _____no_output_____ ###Markdown Setup ###Code path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=4 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. Found 0 images belonging to 0 classes. ###Markdown In this notebook we're going to create an ensemble of models and use their average as our predictions. 
For each ensemble, we're going to follow our usual fine-tuning steps:1) Create a model that retrains just the last layer2) Add this to a model containing all VGG layers except the last layer3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)4) Add data augmentation, fine-tuning the dense layers without pre-computation.So first, we need to create our VGG model and pre-compute the output of the conv layers: ###Code model = Vgg16().model conv_layers,fc_layers = split_at(model, Convolution2D) conv_model = Sequential(conv_layers) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) ###Output _____no_output_____ ###Markdown In the future we can just load these precomputed features: ###Code trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') ###Output _____no_output_____ ###Markdown We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done: ###Code trn = get_data(path+'train') val = get_data(path+'valid') save_array(model_path+'train_data.bc', trn) save_array(model_path+'valid_data.bc', val) ###Output _____no_output_____ ###Markdown In the future we can just load these resized images: ###Code trn = load_array(model_path+'train_data.bc') val = load_array(model_path+'valid_data.bc') ###Output _____no_output_____ ###Markdown Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model: ###Code model.pop() model.pop() ll_val_feat = model.predict_generator(val_batches, val_batches.nb_sample) ll_feat = model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_ll_feat.bc', ll_feat) save_array(model_path + 'valid_ll_feat.bc', ll_val_feat) ll_feat = load_array(model_path+ 'train_ll_feat.bc') ll_val_feat = load_array(model_path + 'valid_ll_feat.bc') ###Output _____no_output_____ ###Markdown ...and let's also grab the test data, for when we need to submit: ###Code test = get_data(path+'test') save_array(model_path+'test_data.bc', test) test = load_array(model_path+'test_data.bc') ###Output _____no_output_____ ###Markdown Last layer The functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model. 
###Code def get_ll_layers(): return [ BatchNormalization(input_shape=(4096,)), Dropout(0.5), Dense(2, activation='softmax') ] def train_last_layer(i): ll_layers = get_ll_layers() ll_model = Sequential(ll_layers) ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_model.optimizer.lr=1e-5 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=12) ll_model.optimizer.lr=1e-7 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=1) ll_model.save_weights(model_path+'ll_bn' + i + '.h5') vgg = Vgg16() model = vgg.model model.pop(); model.pop(); model.pop() for layer in model.layers: layer.trainable=False model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_layers = get_ll_layers() for layer in ll_layers: model.add(layer) for l1,l2 in zip(ll_model.layers, model.layers[-3:]): l2.set_weights(l1.get_weights()) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.save_weights(model_path+'bn' + i + '.h5') return model ###Output _____no_output_____ ###Markdown Dense model ###Code def get_conv_model(model): layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) fc_layers = layers[last_conv_idx+1:] return conv_model, fc_layers, last_conv_idx def get_fc_layers(p, in_shape): return [ MaxPooling2D(input_shape=in_shape), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] def train_dense_layers(i, model): conv_model, fc_layers, last_conv_idx = get_conv_model(model) conv_shape = conv_model.output_shape[1:] fc_model = Sequential(get_fc_layers(0.5, conv_shape)) for l1,l2 in zip(fc_model.layers, fc_layers): weights = l2.get_weights() l1.set_weights(weights) fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) fc_model.fit(trn_features, trn_labels, nb_epoch=2, batch_size=batch_size, validation_data=(val_features, val_labels)) gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, width_zoom_range=0.05, zoom_range=0.05, channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True) batches = gen.flow(trn, trn_labels, batch_size=batch_size) val_batches = image.ImageDataGenerator().flow(val, val_labels, shuffle=False, batch_size=batch_size) for layer in conv_model.layers: layer.trainable = False for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer) for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): l1.set_weights(l2.get_weights()) conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5') conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.N) for layer in conv_model.layers[16:]: layer.trainable = True conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.optimizer.lr = 1e-7 conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.save_weights(model_path + 'aug' + i + '.h5') ###Output _____no_output_____ 
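The combine step (see `get_ens_pred` and the `np.stack(...).mean(axis=0)` call earlier in this document) simply averages the softmax outputs of the individual models. A tiny, self-contained sketch of that averaging with made-up prediction arrays, just to make the shapes explicit:

```python
import numpy as np

# Pretend three models each predicted probabilities for four images over two classes
preds_per_model = [
    np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]]),
    np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3], [0.1, 0.9]]),
    np.array([[0.7, 0.3], [0.3, 0.7], [0.8, 0.2], [0.2, 0.8]]),
]

# Stack to (n_models, n_images, n_classes), then average over the model axis
avg_preds = np.stack(preds_per_model).mean(axis=0)
print(avg_preds.shape)            # (4, 2)
print(avg_preds.argmax(axis=1))   # ensemble class prediction per image
```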
###Markdown Build ensemble ###Code for i in range(5): i = str(i) model = train_last_layer(i) train_dense_layers(i, model) ###Output Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5184 - acc: 0.7895 - val_loss: 0.1549 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1984 - acc: 0.9237 - val_loss: 0.0941 - val_acc: 0.9670 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1524 - acc: 0.9426 - val_loss: 0.0762 - val_acc: 0.9735 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1247 - acc: 0.9542 - val_loss: 0.0662 - val_acc: 0.9740 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1128 - acc: 0.9567 - val_loss: 0.0609 - val_acc: 0.9760 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1043 - acc: 0.9635 - val_loss: 0.0560 - val_acc: 0.9775 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1010 - acc: 0.9640 - val_loss: 0.0548 - val_acc: 0.9790 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9650 - val_loss: 0.0526 - val_acc: 0.9780 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0926 - acc: 0.9656 - val_loss: 0.0513 - val_acc: 0.9785 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0881 - acc: 0.9680 - val_loss: 0.0500 - val_acc: 0.9795 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0933 - acc: 0.9666 - val_loss: 0.0497 - val_acc: 0.9800 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0842 - acc: 0.9693 - val_loss: 0.0484 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0824 - acc: 0.9696 - val_loss: 0.0486 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0798 - acc: 0.9719 - val_loss: 0.0500 - val_acc: 0.9830 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0415 - acc: 0.9853 - val_loss: 0.0551 - val_acc: 0.9840 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0559 - acc: 0.9814 - val_loss: 0.0578 - val_acc: 0.9825 Epoch 1/8 23000/23000 [==============================] - 271s - loss: 0.0515 - acc: 0.9834 - val_loss: 0.0645 - val_acc: 0.9860 Epoch 2/8 23000/23000 [==============================] - 271s - loss: 0.0385 - acc: 0.9875 - val_loss: 0.0670 - val_acc: 0.9850 Epoch 3/8 23000/23000 [==============================] - 271s - loss: 0.0313 - acc: 0.9890 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 4/8 23000/23000 [==============================] - 271s - loss: 0.0287 - acc: 0.9903 - val_loss: 0.0733 - val_acc: 0.9840 Epoch 5/8 23000/23000 [==============================] - 271s - loss: 0.0244 - acc: 0.9924 - val_loss: 0.0773 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 271s - loss: 0.0205 - acc: 0.9927 - val_loss: 0.0900 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 271s - loss: 0.0209 - acc: 0.9929 - val_loss: 0.0860 - val_acc: 0.9865 Epoch 8/8 23000/23000 [==============================] - 420s - loss: 0.0186 - acc: 0.9930 - val_loss: 0.0923 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 315s - loss: 0.0196 - acc: 0.9930 - val_loss: 0.0909 - val_acc: 0.9845 Epoch 2/10 23000/23000 [==============================] - 362s - loss: 0.0165 - acc: 
0.9945 - val_loss: 0.1023 - val_acc: 0.9830 Epoch 3/10 23000/23000 [==============================] - 447s - loss: 0.0179 - acc: 0.9940 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 601s - loss: 0.0112 - acc: 0.9960 - val_loss: 0.1030 - val_acc: 0.9830 Epoch 5/10 23000/23000 [==============================] - 528s - loss: 0.0130 - acc: 0.9956 - val_loss: 0.0946 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 657s - loss: 0.0110 - acc: 0.9961 - val_loss: 0.0904 - val_acc: 0.9850 Epoch 7/10 23000/23000 [==============================] - 621s - loss: 0.0116 - acc: 0.9963 - val_loss: 0.0872 - val_acc: 0.9865 Epoch 8/10 23000/23000 [==============================] - 603s - loss: 0.0118 - acc: 0.9960 - val_loss: 0.0813 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 616s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.1053 - val_acc: 0.9835 Epoch 10/10 23000/23000 [==============================] - 661s - loss: 0.0098 - acc: 0.9968 - val_loss: 0.0970 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5106 - acc: 0.7935 - val_loss: 0.1504 - val_acc: 0.9455 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2005 - acc: 0.9241 - val_loss: 0.0890 - val_acc: 0.9680 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1465 - acc: 0.9444 - val_loss: 0.0714 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1280 - acc: 0.9540 - val_loss: 0.0614 - val_acc: 0.9765 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1131 - acc: 0.9586 - val_loss: 0.0561 - val_acc: 0.9795 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1079 - acc: 0.9620 - val_loss: 0.0515 - val_acc: 0.9795 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0998 - acc: 0.9631 - val_loss: 0.0484 - val_acc: 0.9825 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0947 - acc: 0.9673 - val_loss: 0.0457 - val_acc: 0.9845 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9676 - val_loss: 0.0449 - val_acc: 0.9855 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0921 - acc: 0.9670 - val_loss: 0.0451 - val_acc: 0.9845 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0893 - acc: 0.9681 - val_loss: 0.0441 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0836 - acc: 0.9691 - val_loss: 0.0428 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0833 - acc: 0.9718 - val_loss: 0.0434 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0814 - acc: 0.9736 - val_loss: 0.0463 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0389 - acc: 0.9859 - val_loss: 0.0493 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0613 - acc: 0.9807 - val_loss: 0.0563 - val_acc: 0.9855 Epoch 1/8 23000/23000 [==============================] - 325s - loss: 0.0450 - acc: 0.9860 - val_loss: 0.0685 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 766s - loss: 0.0364 - acc: 0.9877 - val_loss: 0.0616 - val_acc: 0.9845 Epoch 3/8 23000/23000 
[==============================] - 600s - loss: 0.0338 - acc: 0.9891 - val_loss: 0.0585 - val_acc: 0.9845 Epoch 4/8 23000/23000 [==============================] - 634s - loss: 0.0288 - acc: 0.9903 - val_loss: 0.0740 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0265 - acc: 0.9904 - val_loss: 0.0789 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 780s - loss: 0.0254 - acc: 0.9909 - val_loss: 0.0853 - val_acc: 0.9855 Epoch 7/8 23000/23000 [==============================] - 680s - loss: 0.0180 - acc: 0.9937 - val_loss: 0.0747 - val_acc: 0.9870 Epoch 8/8 23000/23000 [==============================] - 776s - loss: 0.0191 - acc: 0.9939 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 712s - loss: 0.0191 - acc: 0.9929 - val_loss: 0.0943 - val_acc: 0.9855 Epoch 2/10 23000/23000 [==============================] - 679s - loss: 0.0175 - acc: 0.9946 - val_loss: 0.0723 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 640s - loss: 0.0148 - acc: 0.9949 - val_loss: 0.0756 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 761s - loss: 0.0147 - acc: 0.9953 - val_loss: 0.0772 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 733s - loss: 0.0163 - acc: 0.9946 - val_loss: 0.0931 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 574s - loss: 0.0107 - acc: 0.9967 - val_loss: 0.0874 - val_acc: 0.9845 Epoch 7/10 23000/23000 [==============================] - 611s - loss: 0.0123 - acc: 0.9958 - val_loss: 0.0918 - val_acc: 0.9855 Epoch 8/10 23000/23000 [==============================] - 668s - loss: 0.0098 - acc: 0.9965 - val_loss: 0.0896 - val_acc: 0.9855 Epoch 9/10 23000/23000 [==============================] - 624s - loss: 0.0096 - acc: 0.9964 - val_loss: 0.1012 - val_acc: 0.9850 Epoch 10/10 23000/23000 [==============================] - 747s - loss: 0.0113 - acc: 0.9960 - val_loss: 0.0961 - val_acc: 0.9835 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.5167 - acc: 0.7867 - val_loss: 0.1299 - val_acc: 0.9555 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1922 - acc: 0.9265 - val_loss: 0.0803 - val_acc: 0.9695 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1461 - acc: 0.9454 - val_loss: 0.0646 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1255 - acc: 0.9536 - val_loss: 0.0543 - val_acc: 0.9790 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1113 - acc: 0.9608 - val_loss: 0.0505 - val_acc: 0.9820 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1058 - acc: 0.9607 - val_loss: 0.0464 - val_acc: 0.9825 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0957 - acc: 0.9654 - val_loss: 0.0448 - val_acc: 0.9840 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0964 - acc: 0.9657 - val_loss: 0.0427 - val_acc: 0.9850 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0996 - acc: 0.9662 - val_loss: 0.0420 - val_acc: 0.9860 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0931 - acc: 0.9670 - val_loss: 0.0408 - val_acc: 0.9855 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0899 - acc: 0.9680 - val_loss: 0.0395 - val_acc: 0.9860 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 
0.0837 - acc: 0.9717 - val_loss: 0.0390 - val_acc: 0.9860 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9703 - val_loss: 0.0391 - val_acc: 0.9865 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0796 - acc: 0.9735 - val_loss: 0.0382 - val_acc: 0.9855 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0353 - acc: 0.9874 - val_loss: 0.0364 - val_acc: 0.9880 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0622 - acc: 0.9802 - val_loss: 0.0490 - val_acc: 0.9870 Epoch 1/8 23000/23000 [==============================] - 773s - loss: 0.0426 - acc: 0.9856 - val_loss: 0.0442 - val_acc: 0.9885 Epoch 2/8 23000/23000 [==============================] - 774s - loss: 0.0394 - acc: 0.9864 - val_loss: 0.0501 - val_acc: 0.9885 Epoch 3/8 23000/23000 [==============================] - 687s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0500 - val_acc: 0.9875 Epoch 4/8 23000/23000 [==============================] - 655s - loss: 0.0292 - acc: 0.9900 - val_loss: 0.0535 - val_acc: 0.9870 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0268 - acc: 0.9914 - val_loss: 0.0605 - val_acc: 0.9855 Epoch 6/8 23000/23000 [==============================] - 789s - loss: 0.0208 - acc: 0.9926 - val_loss: 0.0591 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 798s - loss: 0.0191 - acc: 0.9931 - val_loss: 0.0638 - val_acc: 0.9860 Epoch 8/8 23000/23000 [==============================] - 708s - loss: 0.0192 - acc: 0.9932 - val_loss: 0.0597 - val_acc: 0.9850 Epoch 1/10 23000/23000 [==============================] - 606s - loss: 0.0178 - acc: 0.9942 - val_loss: 0.0620 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 756s - loss: 0.0158 - acc: 0.9941 - val_loss: 0.0694 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 418s - loss: 0.0176 - acc: 0.9939 - val_loss: 0.0641 - val_acc: 0.9855 Epoch 4/10 23000/23000 [==============================] - 271s - loss: 0.0118 - acc: 0.9958 - val_loss: 0.0623 - val_acc: 0.9840 Epoch 5/10 23000/23000 [==============================] - 271s - loss: 0.0150 - acc: 0.9947 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 6/10 23000/23000 [==============================] - 271s - loss: 0.0119 - acc: 0.9961 - val_loss: 0.0595 - val_acc: 0.9880 Epoch 7/10 23000/23000 [==============================] - 304s - loss: 0.0121 - acc: 0.9957 - val_loss: 0.0668 - val_acc: 0.9885 Epoch 8/10 23000/23000 [==============================] - 273s - loss: 0.0124 - acc: 0.9960 - val_loss: 0.0619 - val_acc: 0.9885 Epoch 9/10 23000/23000 [==============================] - 271s - loss: 0.0099 - acc: 0.9963 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 273s - loss: 0.0091 - acc: 0.9970 - val_loss: 0.0628 - val_acc: 0.9890 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.4585 - acc: 0.8130 - val_loss: 0.1306 - val_acc: 0.9515 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1920 - acc: 0.9270 - val_loss: 0.0863 - val_acc: 0.9655 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1504 - acc: 0.9450 - val_loss: 0.0705 - val_acc: 0.9740 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1275 - acc: 0.9529 - val_loss: 0.0592 - val_acc: 0.9795 Epoch 5/12 
23000/23000 [==============================] - 0s - loss: 0.1190 - acc: 0.9555 - val_loss: 0.0555 - val_acc: 0.9815 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1068 - acc: 0.9609 - val_loss: 0.0536 - val_acc: 0.9805 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1003 - acc: 0.9624 - val_loss: 0.0496 - val_acc: 0.9830 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0979 - acc: 0.9660 - val_loss: 0.0482 - val_acc: 0.9825 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9678 - val_loss: 0.0475 - val_acc: 0.9830 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0917 - acc: 0.9666 - val_loss: 0.0458 - val_acc: 0.9825 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9665 - val_loss: 0.0454 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0919 - acc: 0.9675 - val_loss: 0.0443 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0883 - acc: 0.9685 - val_loss: 0.0440 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0825 - acc: 0.9720 - val_loss: 0.0437 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0359 - acc: 0.9874 - val_loss: 0.0474 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 272s - loss: 0.0581 - acc: 0.9817 - val_loss: 0.0562 - val_acc: 0.9850 Epoch 1/8 23000/23000 [==============================] - 520s - loss: 0.0486 - acc: 0.9833 - val_loss: 0.0590 - val_acc: 0.9830 Epoch 2/8 23000/23000 [==============================] - 745s - loss: 0.0379 - acc: 0.9867 - val_loss: 0.0595 - val_acc: 0.9840 Epoch 3/8 23000/23000 [==============================] - 736s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0628 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 708s - loss: 0.0260 - acc: 0.9903 - val_loss: 0.0722 - val_acc: 0.9855 Epoch 5/8 23000/23000 [==============================] - 700s - loss: 0.0250 - acc: 0.9921 - val_loss: 0.0734 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 802s - loss: 0.0212 - acc: 0.9923 - val_loss: 0.0721 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 765s - loss: 0.0211 - acc: 0.9928 - val_loss: 0.0772 - val_acc: 0.9835 Epoch 8/8 23000/23000 [==============================] - 743s - loss: 0.0185 - acc: 0.9933 - val_loss: 0.0756 - val_acc: 0.9835 Epoch 1/10 23000/23000 [==============================] - 782s - loss: 0.0168 - acc: 0.9941 - val_loss: 0.0815 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 580s - loss: 0.0155 - acc: 0.9942 - val_loss: 0.0771 - val_acc: 0.9840 Epoch 3/10 23000/23000 [==============================] - 654s - loss: 0.0142 - acc: 0.9954 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 4/10 23000/23000 [==============================] - 692s - loss: 0.0141 - acc: 0.9955 - val_loss: 0.0716 - val_acc: 0.9870 Epoch 5/10 23000/23000 [==============================] - 607s - loss: 0.0120 - acc: 0.9959 - val_loss: 0.0757 - val_acc: 0.9850 Epoch 6/10 23000/23000 [==============================] - 789s - loss: 0.0129 - acc: 0.9956 - val_loss: 0.0741 - val_acc: 0.9860 Epoch 7/10 23000/23000 [==============================] - 767s - loss: 0.0111 - acc: 0.9960 - val_loss: 0.0747 - val_acc: 0.9865 Epoch 8/10 
23000/23000 [==============================] - 557s - loss: 0.0103 - acc: 0.9967 - val_loss: 0.0774 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 521s - loss: 0.0106 - acc: 0.9962 - val_loss: 0.0855 - val_acc: 0.9855 Epoch 10/10 23000/23000 [==============================] - 484s - loss: 0.0095 - acc: 0.9970 - val_loss: 0.0780 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5435 - acc: 0.7783 - val_loss: 0.1669 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2054 - acc: 0.9227 - val_loss: 0.0999 - val_acc: 0.9675 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1549 - acc: 0.9405 - val_loss: 0.0763 - val_acc: 0.9725 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1327 - acc: 0.9520 - val_loss: 0.0642 - val_acc: 0.9755 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1147 - acc: 0.9573 - val_loss: 0.0590 - val_acc: 0.9790 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1078 - acc: 0.9605 - val_loss: 0.0545 - val_acc: 0.9815 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1001 - acc: 0.9631 - val_loss: 0.0526 - val_acc: 0.9820 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0977 - acc: 0.9654 - val_loss: 0.0515 - val_acc: 0.9815 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0937 - acc: 0.9660 - val_loss: 0.0497 - val_acc: 0.9825 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0942 - acc: 0.9683 - val_loss: 0.0489 - val_acc: 0.9835 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0904 - acc: 0.9687 - val_loss: 0.0473 - val_acc: 0.9830 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0855 - acc: 0.9689 - val_loss: 0.0469 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9685 - val_loss: 0.0470 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0846 - acc: 0.9719 - val_loss: 0.0510 - val_acc: 0.9845 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0392 - acc: 0.9866 - val_loss: 0.0548 - val_acc: 0.9860 Epoch 1/1 23000/23000 [==============================] - 273s - loss: 0.0585 - acc: 0.9800 - val_loss: 0.0608 - val_acc: 0.9875 Epoch 1/8 23000/23000 [==============================] - 677s - loss: 0.0456 - acc: 0.9845 - val_loss: 0.0690 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 654s - loss: 0.0398 - acc: 0.9859 - val_loss: 0.0763 - val_acc: 0.9835 Epoch 3/8 23000/23000 [==============================] - 711s - loss: 0.0304 - acc: 0.9894 - val_loss: 0.0662 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 646s - loss: 0.0252 - acc: 0.9913 - val_loss: 0.0747 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 726s - loss: 0.0246 - acc: 0.9909 - val_loss: 0.0809 - val_acc: 0.9850 Epoch 6/8 23000/23000 [==============================] - 582s - loss: 0.0182 - acc: 0.9933 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 627s - loss: 0.0201 - acc: 0.9928 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 8/8 23000/23000 [==============================] - 674s - loss: 0.0172 - acc: 0.9944 - 
val_loss: 0.0717 - val_acc: 0.9855 Epoch 1/10 23000/23000 [==============================] - 736s - loss: 0.0171 - acc: 0.9939 - val_loss: 0.0820 - val_acc: 0.9850 Epoch 2/10 23000/23000 [==============================] - 634s - loss: 0.0184 - acc: 0.9941 - val_loss: 0.0829 - val_acc: 0.9860 Epoch 3/10 23000/23000 [==============================] - 599s - loss: 0.0156 - acc: 0.9946 - val_loss: 0.0863 - val_acc: 0.9865 Epoch 4/10 23000/23000 [==============================] - 717s - loss: 0.0142 - acc: 0.9952 - val_loss: 0.0903 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 809s - loss: 0.0116 - acc: 0.9960 - val_loss: 0.0883 - val_acc: 0.9860 Epoch 6/10 23000/23000 [==============================] - 754s - loss: 0.0127 - acc: 0.9953 - val_loss: 0.0887 - val_acc: 0.9855 Epoch 7/10 23000/23000 [==============================] - 499s - loss: 0.0100 - acc: 0.9964 - val_loss: 0.0835 - val_acc: 0.9850 Epoch 8/10 23000/23000 [==============================] - 317s - loss: 0.0090 - acc: 0.9971 - val_loss: 0.0804 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 301s - loss: 0.0111 - acc: 0.9963 - val_loss: 0.0869 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 442s - loss: 0.0079 - acc: 0.9971 - val_loss: 0.0805 - val_acc: 0.9870 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval() ###Output _____no_output_____ ###Markdown Setup ###Code path = "data/dogscats/" # path = "data/dogscats/sample/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=128 # batch_size=1 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. Found 12500 images belonging to 1 classes. ###Markdown In this notebook we're going to create an ensemble of models and use their average as our predictions. 
For each ensemble, we're going to follow our usual fine-tuning steps:
1) Create a model that retrains just the last layer
2) Add this to a model containing all VGG layers except the last layer
3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)
4) Add data augmentation, fine-tuning the dense layers without pre-computation.
So first, we need to create our VGG model and pre-compute the output of the conv layers: ###Code model = Vgg16().model conv_layers,fc_layers = split_at(model, Convolution2D) conv_model = Sequential(conv_layers) val_features = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/batch_size))) trn_features = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size))) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) ###Output _____no_output_____ ###Markdown In the future we can just load these precomputed features: ###Code trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') ###Output _____no_output_____ ###Markdown We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done: ###Code trn = get_data(path+'train') val = get_data(path+'valid') save_array(model_path+'train_data.bc', trn) save_array(model_path+'valid_data.bc', val) ###Output _____no_output_____ ###Markdown In the future we can just load these resized images: ###Code trn = load_array(model_path+'train_data.bc') val = load_array(model_path+'valid_data.bc') ###Output _____no_output_____ ###Markdown Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model: ###Code model.pop() model.pop() ll_val_feat = model.predict_generator(val_batches, int(np.ceil(val_batches.samples/batch_size))) ll_feat = model.predict_generator(batches, int(np.ceil(batches.samples/batch_size))) save_array(model_path + 'train_ll_feat.bc', ll_feat) save_array(model_path + 'valid_ll_feat.bc', ll_val_feat) ll_feat = load_array(model_path+ 'train_ll_feat.bc') ll_val_feat = load_array(model_path + 'valid_ll_feat.bc') ###Output _____no_output_____ ###Markdown ...and let's also grab the test data, for when we need to submit: ###Code test = get_data(path+'test') save_array(model_path+'test_data.bc', test) test = load_array(model_path+'test_data.bc') ###Output _____no_output_____ ###Markdown Last layer The functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model.
###Code def get_ll_layers(): return [ BatchNormalization(input_shape=(4096,)), Dropout(0.5), Dense(2, activation='softmax') ] def train_last_layer(i): ll_layers = get_ll_layers() ll_model = Sequential(ll_layers) ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_model.optimizer.lr=1e-5 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=12) ll_model.optimizer.lr=1e-7 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), epochs=1) ll_model.save_weights(model_path+'ll_bn' + i + '.h5') vgg = Vgg16BN() model = vgg.model model.pop(); model.pop(); model.pop() for layer in model.layers: layer.trainable=False model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_layers = get_ll_layers() for layer in ll_layers: model.add(layer) for l1,l2 in zip(ll_model.layers, model.layers[-3:]): l2.set_weights(l1.get_weights()) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.save_weights(model_path+'bn' + i + '.h5') return model ###Output _____no_output_____ ###Markdown Dense model ###Code def get_conv_model(model): layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) fc_layers = layers[last_conv_idx+1:] return conv_model, fc_layers, last_conv_idx def get_fc_layers(p, in_shape): return [ MaxPooling2D(input_shape=in_shape), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] def train_dense_layers(i, model): conv_model, fc_layers, last_conv_idx = get_conv_model(model) conv_shape = conv_model.output_shape[1:] fc_model = Sequential(get_fc_layers(0.5, conv_shape)) for l1,l2 in zip(fc_model.layers, fc_layers): weights = l2.get_weights() l1.set_weights(weights) fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) fc_model.fit(trn_features, trn_labels, epochs=2, batch_size=batch_size, validation_data=(val_features, val_labels)) # width_zoom_range removed from the following because not available in Keras2 gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, zoom_range=0.05, channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True) batches = gen.flow(trn, trn_labels, batch_size=batch_size) val_batches = image.ImageDataGenerator().flow(val, val_labels, shuffle=False, batch_size=batch_size) for layer in conv_model.layers: layer.trainable = False for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer) for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): l1.set_weights(l2.get_weights()) steps_per_epoch = int(np.ceil(batches.n/batch_size)) validation_steps = int(np.ceil(val_batches.n/batch_size)) conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5') conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) for layer in conv_model.layers[16:]: layer.trainable = True conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.optimizer.lr = 1e-7 conv_model.fit_generator(batches, 
steps_per_epoch=steps_per_epoch, epochs=10, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(model_path + 'aug' + i + '.h5') ###Output _____no_output_____ ###Markdown Build ensemble ###Code for i in range(5): i = str(i) model = train_last_layer(i) train_dense_layers(i, model) ###Output Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.8578 - acc: 0.6804 - val_loss: 2.2534 - val_acc: 0.3120 Epoch 2/12 23000/23000 [==============================] - 1s - loss: 0.6839 - acc: 0.7794 - val_loss: 2.4996 - val_acc: 0.2990 Epoch 3/12 23000/23000 [==============================] - 1s - loss: 0.6496 - acc: 0.7960 - val_loss: 2.4651 - val_acc: 0.2970 Epoch 4/12 23000/23000 [==============================] - 1s - loss: 0.6391 - acc: 0.7963 - val_loss: 2.4541 - val_acc: 0.2965 Epoch 5/12 23000/23000 [==============================] - 1s - loss: 0.6348 - acc: 0.8000 - val_loss: 2.4300 - val_acc: 0.2945 Epoch 6/12 23000/23000 [==============================] - 1s - loss: 0.6146 - acc: 0.7994 - val_loss: 2.3874 - val_acc: 0.2950 Epoch 7/12 23000/23000 [==============================] - 1s - loss: 0.6045 - acc: 0.8013 - val_loss: 2.3943 - val_acc: 0.2920 Epoch 8/12 23000/23000 [==============================] - 1s - loss: 0.5854 - acc: 0.8081 - val_loss: 2.3667 - val_acc: 0.2930 Epoch 9/12 23000/23000 [==============================] - 1s - loss: 0.5955 - acc: 0.8052 - val_loss: 2.3389 - val_acc: 0.2930 Epoch 10/12 23000/23000 [==============================] - 1s - loss: 0.5776 - acc: 0.8110 - val_loss: 2.2973 - val_acc: 0.2920 Epoch 11/12 23000/23000 [==============================] - 1s - loss: 0.5655 - acc: 0.8106 - val_loss: 2.2377 - val_acc: 0.2935 Epoch 12/12 23000/23000 [==============================] - 1s - loss: 0.5638 - acc: 0.8119 - val_loss: 2.2357 - val_acc: 0.2895 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 1s - loss: 0.5568 - acc: 0.8107 - val_loss: 2.2523 - val_acc: 0.2910 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 9s - loss: 0.0881 - acc: 0.9671 - val_loss: 0.0441 - val_acc: 0.9855 Epoch 2/2 23000/23000 [==============================] - 9s - loss: 0.0414 - acc: 0.9862 - val_loss: 0.0419 - val_acc: 0.9850 Epoch 1/1 180/180 [==============================] - 194s - loss: 0.0536 - acc: 0.9815 - val_loss: 0.0425 - val_acc: 0.9850 Epoch 1/8 180/180 [==============================] - 191s - loss: 0.0429 - acc: 0.9835 - val_loss: 0.0487 - val_acc: 0.9830 Epoch 2/8 180/180 [==============================] - 189s - loss: 0.0344 - acc: 0.9876 - val_loss: 0.0471 - val_acc: 0.9860 Epoch 3/8 180/180 [==============================] - 188s - loss: 0.0242 - acc: 0.9913 - val_loss: 0.0460 - val_acc: 0.9855 Epoch 4/8 180/180 [==============================] - 188s - loss: 0.0293 - acc: 0.9881 - val_loss: 0.0475 - val_acc: 0.9845 Epoch 5/8 180/180 [==============================] - 188s - loss: 0.0209 - acc: 0.9923 - val_loss: 0.0500 - val_acc: 0.9840 Epoch 6/8 180/180 [==============================] - 188s - loss: 0.0166 - acc: 0.9941 - val_loss: 0.0509 - val_acc: 0.9850 Epoch 7/8 180/180 [==============================] - 188s - loss: 0.0147 - acc: 0.9949 - val_loss: 0.0517 - val_acc: 0.9835 Epoch 8/8 180/180 [==============================] - 188s - loss: 0.0152 - acc: 0.9943 - val_loss: 0.0535 - val_acc: 0.9840 Epoch 1/10 180/180 [==============================] - 189s - 
loss: 0.0130 - acc: 0.9953 - val_loss: 0.0537 - val_acc: 0.9835 Epoch 2/10 180/180 [==============================] - 188s - loss: 0.0141 - acc: 0.9949 - val_loss: 0.0550 - val_acc: 0.9840 Epoch 3/10 180/180 [==============================] - 188s - loss: 0.0111 - acc: 0.9961 - val_loss: 0.0555 - val_acc: 0.9855 Epoch 4/10 180/180 [==============================] - 188s - loss: 0.0109 - acc: 0.9958 - val_loss: 0.0581 - val_acc: 0.9840 Epoch 5/10 180/180 [==============================] - 188s - loss: 0.0080 - acc: 0.9968 - val_loss: 0.0632 - val_acc: 0.9845 Epoch 6/10 180/180 [==============================] - 189s - loss: 0.0088 - acc: 0.9964 - val_loss: 0.0583 - val_acc: 0.9850 Epoch 7/10 180/180 [==============================] - 189s - loss: 0.0071 - acc: 0.9975 - val_loss: 0.0610 - val_acc: 0.9840 Epoch 8/10 180/180 [==============================] - 189s - loss: 0.0077 - acc: 0.9977 - val_loss: 0.0571 - val_acc: 0.9855 Epoch 9/10 180/180 [==============================] - 188s - loss: 0.0057 - acc: 0.9978 - val_loss: 0.0604 - val_acc: 0.9845 Epoch 10/10 180/180 [==============================] - 189s - loss: 0.0058 - acc: 0.9977 - val_loss: 0.0643 - val_acc: 0.9855 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.8303 - acc: 0.6877 - val_loss: 2.2061 - val_acc: 0.3115 Epoch 2/12 23000/23000 [==============================] - 1s - loss: 0.6640 - acc: 0.7842 - val_loss: 2.4278 - val_acc: 0.3025 Epoch 3/12 23000/23000 [==============================] - 1s - loss: 0.6406 - acc: 0.7919 - val_loss: 2.4346 - val_acc: 0.2975 Epoch 4/12 23000/23000 [==============================] - 1s - loss: 0.6319 - acc: 0.7983 - val_loss: 2.4229 - val_acc: 0.2950 Epoch 5/12 23000/23000 [==============================] - 1s - loss: 0.6217 - acc: 0.8004 - val_loss: 2.3945 - val_acc: 0.2945 Epoch 6/12 23000/23000 [==============================] - 1s - loss: 0.6077 - acc: 0.8027 - val_loss: 2.3483 - val_acc: 0.2935 Epoch 7/12 23000/23000 [==============================] - 1s - loss: 0.6033 - acc: 0.8017 - val_loss: 2.3561 - val_acc: 0.2930 Epoch 8/12 23000/23000 [==============================] - 1s - loss: 0.5905 - acc: 0.8072 - val_loss: 2.3143 - val_acc: 0.2910 Epoch 9/12 23000/23000 [==============================] - 1s - loss: 0.5934 - acc: 0.8039 - val_loss: 2.2917 - val_acc: 0.2880 Epoch 10/12 23000/23000 [==============================] - 1s - loss: 0.5747 - acc: 0.8090 - val_loss: 2.3198 - val_acc: 0.2890 Epoch 11/12 23000/23000 [==============================] - 1s - loss: 0.5741 - acc: 0.8102 - val_loss: 2.2705 - val_acc: 0.2875 Epoch 12/12 23000/23000 [==============================] - 1s - loss: 0.5653 - acc: 0.8133 - val_loss: 2.2241 - val_acc: 0.2875 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 1s - loss: 0.5566 - acc: 0.8143 - val_loss: 2.2264 - val_acc: 0.2890 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 9s - loss: 0.0887 - acc: 0.9655 - val_loss: 0.0397 - val_acc: 0.9845 Epoch 2/2 23000/23000 [==============================] - 9s - loss: 0.0392 - acc: 0.9859 - val_loss: 0.0395 - val_acc: 0.9875 Epoch 1/1 180/180 [==============================] - 195s - loss: 0.0522 - acc: 0.9817 - val_loss: 0.0386 - val_acc: 0.9875 Epoch 1/8 180/180 [==============================] - 192s - loss: 0.0397 - acc: 0.9866 - val_loss: 0.0397 - val_acc: 0.9875 Epoch 2/8 180/180 [==============================] - 190s - 
loss: 0.0338 - acc: 0.9882 - val_loss: 0.0425 - val_acc: 0.9865 Epoch 3/8 180/180 [==============================] - 190s - loss: 0.0278 - acc: 0.9893 - val_loss: 0.0424 - val_acc: 0.9855 Epoch 4/8 180/180 [==============================] - 190s - loss: 0.0252 - acc: 0.9912 - val_loss: 0.0437 - val_acc: 0.9860 Epoch 5/8 180/180 [==============================] - 190s - loss: 0.0223 - acc: 0.9926 - val_loss: 0.0418 - val_acc: 0.9845 Epoch 6/8 180/180 [==============================] - 189s - loss: 0.0176 - acc: 0.9936 - val_loss: 0.0448 - val_acc: 0.9845 Epoch 7/8 180/180 [==============================] - 190s - loss: 0.0164 - acc: 0.9941 - val_loss: 0.0456 - val_acc: 0.9860 Epoch 8/8 180/180 [==============================] - 190s - loss: 0.0140 - acc: 0.9950 - val_loss: 0.0479 - val_acc: 0.9855 Epoch 1/10 180/180 [==============================] - 190s - loss: 0.0130 - acc: 0.9957 - val_loss: 0.0510 - val_acc: 0.9835 Epoch 2/10 180/180 [==============================] - 190s - loss: 0.0111 - acc: 0.9962 - val_loss: 0.0510 - val_acc: 0.9850 Epoch 3/10 180/180 [==============================] - 190s - loss: 0.0117 - acc: 0.9962 - val_loss: 0.0485 - val_acc: 0.9870 Epoch 4/10 180/180 [==============================] - 190s - loss: 0.0120 - acc: 0.9963 - val_loss: 0.0450 - val_acc: 0.9890 Epoch 5/10 180/180 [==============================] - 190s - loss: 0.0082 - acc: 0.9973 - val_loss: 0.0459 - val_acc: 0.9865 Epoch 6/10 180/180 [==============================] - 190s - loss: 0.0084 - acc: 0.9971 - val_loss: 0.0535 - val_acc: 0.9855 Epoch 7/10 180/180 [==============================] - 190s - loss: 0.0066 - acc: 0.9976 - val_loss: 0.0523 - val_acc: 0.9860 Epoch 8/10 180/180 [==============================] - 190s - loss: 0.0073 - acc: 0.9976 - val_loss: 0.0535 - val_acc: 0.9855 Epoch 9/10 180/180 [==============================] - 190s - loss: 0.0077 - acc: 0.9971 - val_loss: 0.0560 - val_acc: 0.9855 Epoch 10/10 180/180 [==============================] - 190s - loss: 0.0084 - acc: 0.9976 - val_loss: 0.0556 - val_acc: 0.9855 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.8412 - acc: 0.6823 - val_loss: 2.1991 - val_acc: 0.3060 Epoch 2/12 23000/23000 [==============================] - 1s - loss: 0.6874 - acc: 0.7768 - val_loss: 2.4586 - val_acc: 0.3010 Epoch 3/12 23000/23000 [==============================] - 1s - loss: 0.6574 - acc: 0.7957 - val_loss: 2.4415 - val_acc: 0.2980 Epoch 4/12 23000/23000 [==============================] - 1s - loss: 0.6379 - acc: 0.7957 - val_loss: 2.4440 - val_acc: 0.2935 Epoch 5/12 23000/23000 [==============================] - 1s - loss: 0.6262 - acc: 0.7996 - val_loss: 2.4113 - val_acc: 0.2910 Epoch 6/12 23000/23000 [==============================] - 1s - loss: 0.6145 - acc: 0.8032 - val_loss: 2.3883 - val_acc: 0.2895 Epoch 7/12 23000/23000 [==============================] - 1s - loss: 0.6072 - acc: 0.8072 - val_loss: 2.3196 - val_acc: 0.2895 Epoch 8/12 23000/23000 [==============================] - 1s - loss: 0.6018 - acc: 0.8054 - val_loss: 2.3292 - val_acc: 0.2905 Epoch 9/12 23000/23000 [==============================] - 1s - loss: 0.5941 - acc: 0.8071 - val_loss: 2.2962 - val_acc: 0.2895 Epoch 10/12 23000/23000 [==============================] - 1s - loss: 0.5850 - acc: 0.8074 - val_loss: 2.2776 - val_acc: 0.2890 Epoch 11/12 23000/23000 [==============================] - 1s - loss: 0.5698 - acc: 0.8134 - val_loss: 2.2468 - val_acc: 0.2885 Epoch 12/12 23000/23000 
[==============================] - 1s - loss: 0.5631 - acc: 0.8119 - val_loss: 2.2374 - val_acc: 0.2890 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 1s - loss: 0.5538 - acc: 0.8135 - val_loss: 2.2233 - val_acc: 0.2895 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 9s - loss: 0.0885 - acc: 0.9681 - val_loss: 0.0356 - val_acc: 0.9875 Epoch 2/2 23000/23000 [==============================] - 9s - loss: 0.0411 - acc: 0.9853 - val_loss: 0.0365 - val_acc: 0.9870 Epoch 1/1 180/180 [==============================] - 195s - loss: 0.0515 - acc: 0.9824 - val_loss: 0.0356 - val_acc: 0.9865 Epoch 1/8 180/180 [==============================] - 192s - loss: 0.0361 - acc: 0.9864 - val_loss: 0.0382 - val_acc: 0.9855 Epoch 2/8 180/180 [==============================] - 190s - loss: 0.0331 - acc: 0.9879 - val_loss: 0.0393 - val_acc: 0.9865 Epoch 3/8 180/180 [==============================] - 190s - loss: 0.0288 - acc: 0.9895 - val_loss: 0.0387 - val_acc: 0.9865 Epoch 4/8 180/180 [==============================] - 190s - loss: 0.0266 - acc: 0.9904 - val_loss: 0.0412 - val_acc: 0.9875 Epoch 5/8 180/180 [==============================] - 190s - loss: 0.0198 - acc: 0.9929 - val_loss: 0.0419 - val_acc: 0.9870 Epoch 6/8 180/180 [==============================] - 190s - loss: 0.0166 - acc: 0.9936 - val_loss: 0.0421 - val_acc: 0.9865 Epoch 7/8 180/180 [==============================] - 190s - loss: 0.0139 - acc: 0.9947 - val_loss: 0.0426 - val_acc: 0.9880 Epoch 8/8 180/180 [==============================] - 190s - loss: 0.0125 - acc: 0.9955 - val_loss: 0.0447 - val_acc: 0.9890 Epoch 1/10 180/180 [==============================] - 190s - loss: 0.0147 - acc: 0.9948 - val_loss: 0.0465 - val_acc: 0.9880 Epoch 2/10 180/180 [==============================] - 190s - loss: 0.0120 - acc: 0.9956 - val_loss: 0.0505 - val_acc: 0.9870 Epoch 3/10 180/180 [==============================] - 190s - loss: 0.0103 - acc: 0.9962 - val_loss: 0.0509 - val_acc: 0.9875 Epoch 4/10 180/180 [==============================] - 189s - loss: 0.0106 - acc: 0.9962 - val_loss: 0.0502 - val_acc: 0.9875 Epoch 5/10 180/180 [==============================] - 190s - loss: 0.0079 - acc: 0.9970 - val_loss: 0.0515 - val_acc: 0.9870 Epoch 6/10 180/180 [==============================] - 189s - loss: 0.0073 - acc: 0.9977 - val_loss: 0.0518 - val_acc: 0.9880 Epoch 7/10 180/180 [==============================] - 189s - loss: 0.0070 - acc: 0.9972 - val_loss: 0.0485 - val_acc: 0.9865 Epoch 8/10 180/180 [==============================] - 189s - loss: 0.0065 - acc: 0.9975 - val_loss: 0.0546 - val_acc: 0.9860 Epoch 9/10 180/180 [==============================] - 189s - loss: 0.0062 - acc: 0.9978 - val_loss: 0.0551 - val_acc: 0.9855 Epoch 10/10 180/180 [==============================] - 189s - loss: 0.0067 - acc: 0.9975 - val_loss: 0.0572 - val_acc: 0.9875 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.8589 - acc: 0.6734 - val_loss: 2.1607 - val_acc: 0.3215 Epoch 2/12 23000/23000 [==============================] - 1s - loss: 0.6750 - acc: 0.7772 - val_loss: 2.4255 - val_acc: 0.3075 Epoch 3/12 23000/23000 [==============================] - 1s - loss: 0.6531 - acc: 0.7947 - val_loss: 2.4741 - val_acc: 0.3050 Epoch 4/12 23000/23000 [==============================] - 1s - loss: 0.6389 - acc: 0.7968 - val_loss: 2.4259 - val_acc: 0.3015 Epoch 5/12 23000/23000 
[==============================] - 1s - loss: 0.6368 - acc: 0.7971 - val_loss: 2.4019 - val_acc: 0.3000 Epoch 6/12 23000/23000 [==============================] - 1s - loss: 0.6133 - acc: 0.8034 - val_loss: 2.4132 - val_acc: 0.2975 Epoch 7/12 23000/23000 [==============================] - 1s - loss: 0.6134 - acc: 0.8041 - val_loss: 2.3946 - val_acc: 0.2965 Epoch 8/12 23000/23000 [==============================] - 1s - loss: 0.5964 - acc: 0.8073 - val_loss: 2.3359 - val_acc: 0.2970 Epoch 9/12 23000/23000 [==============================] - 1s - loss: 0.5894 - acc: 0.8072 - val_loss: 2.2916 - val_acc: 0.2965 Epoch 10/12 23000/23000 [==============================] - 1s - loss: 0.5713 - acc: 0.8126 - val_loss: 2.3110 - val_acc: 0.2960 Epoch 11/12 23000/23000 [==============================] - 1s - loss: 0.5836 - acc: 0.8102 - val_loss: 2.2848 - val_acc: 0.2935 Epoch 12/12 23000/23000 [==============================] - 1s - loss: 0.5640 - acc: 0.8156 - val_loss: 2.2374 - val_acc: 0.2940 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 1s - loss: 0.5658 - acc: 0.8117 - val_loss: 2.2397 - val_acc: 0.2935 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 9s - loss: 0.0854 - acc: 0.9671 - val_loss: 0.0485 - val_acc: 0.9820 Epoch 2/2 23000/23000 [==============================] - 9s - loss: 0.0373 - acc: 0.9875 - val_loss: 0.0438 - val_acc: 0.9845 Epoch 1/1 180/180 [==============================] - 195s - loss: 0.0497 - acc: 0.9823 - val_loss: 0.0421 - val_acc: 0.9855 Epoch 1/8 180/180 [==============================] - 192s - loss: 0.0429 - acc: 0.9853 - val_loss: 0.0445 - val_acc: 0.9870 Epoch 2/8 180/180 [==============================] - 191s - loss: 0.0324 - acc: 0.9883 - val_loss: 0.0470 - val_acc: 0.9855 Epoch 3/8 180/180 [==============================] - 190s - loss: 0.0309 - acc: 0.9896 - val_loss: 0.0489 - val_acc: 0.9845 Epoch 4/8 180/180 [==============================] - 190s - loss: 0.0222 - acc: 0.9925 - val_loss: 0.0469 - val_acc: 0.9855 Epoch 5/8 180/180 [==============================] - 190s - loss: 0.0210 - acc: 0.9929 - val_loss: 0.0482 - val_acc: 0.9850 Epoch 6/8 180/180 [==============================] - 190s - loss: 0.0189 - acc: 0.9932 - val_loss: 0.0478 - val_acc: 0.9850 Epoch 7/8 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft_bn(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval().mean() ###Output _____no_output_____ ###Markdown Setup ###Code path = "data/dogscats/" # path = "data/dogscats/sample/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=128 # batch_size=1 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. Found 12500 images belonging to 1 classes. 
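###Markdown To make the "Combine ensemble and test" step above concrete: `get_ens_pred` returns one softmax array per saved model, and `np.stack(...).mean(axis=0)` simply averages those probabilities image-by-image (soft voting) before the most likely class is taken. Below is a tiny numeric sketch with made-up probabilities for three models and two images. ###Code
# Made-up softmax outputs: three models x two images x two classes (illustration only).
import numpy as np

preds = [np.array([[0.9, 0.1], [0.4, 0.6]]),
         np.array([[0.8, 0.2], [0.6, 0.4]]),
         np.array([[0.7, 0.3], [0.3, 0.7]])]

avg = np.stack(preds).mean(axis=0)  # shape (2, 2): averaged probabilities per image
print(avg)                          # [[0.8, 0.2], [~0.433, ~0.567]]
print(avg.argmax(axis=1))           # ensemble's predicted class per image -> [0 1]
###Output _____no_output_____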
[==============================] - 1s - loss: 0.6368 - acc: 0.7971 - val_loss: 2.4019 - val_acc: 0.3000 Epoch 6/12 23000/23000 [==============================] - 1s - loss: 0.6133 - acc: 0.8034 - val_loss: 2.4132 - val_acc: 0.2975 Epoch 7/12 23000/23000 [==============================] - 1s - loss: 0.6134 - acc: 0.8041 - val_loss: 2.3946 - val_acc: 0.2965 Epoch 8/12 23000/23000 [==============================] - 1s - loss: 0.5964 - acc: 0.8073 - val_loss: 2.3359 - val_acc: 0.2970 Epoch 9/12 23000/23000 [==============================] - 1s - loss: 0.5894 - acc: 0.8072 - val_loss: 2.2916 - val_acc: 0.2965 Epoch 10/12 23000/23000 [==============================] - 1s - loss: 0.5713 - acc: 0.8126 - val_loss: 2.3110 - val_acc: 0.2960 Epoch 11/12 23000/23000 [==============================] - 1s - loss: 0.5836 - acc: 0.8102 - val_loss: 2.2848 - val_acc: 0.2935 Epoch 12/12 23000/23000 [==============================] - 1s - loss: 0.5640 - acc: 0.8156 - val_loss: 2.2374 - val_acc: 0.2940 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 1s - loss: 0.5658 - acc: 0.8117 - val_loss: 2.2397 - val_acc: 0.2935 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 9s - loss: 0.0854 - acc: 0.9671 - val_loss: 0.0485 - val_acc: 0.9820 Epoch 2/2 23000/23000 [==============================] - 9s - loss: 0.0373 - acc: 0.9875 - val_loss: 0.0438 - val_acc: 0.9845 Epoch 1/1 180/180 [==============================] - 195s - loss: 0.0497 - acc: 0.9823 - val_loss: 0.0421 - val_acc: 0.9855 Epoch 1/8 180/180 [==============================] - 192s - loss: 0.0429 - acc: 0.9853 - val_loss: 0.0445 - val_acc: 0.9870 Epoch 2/8 180/180 [==============================] - 191s - loss: 0.0324 - acc: 0.9883 - val_loss: 0.0470 - val_acc: 0.9855 Epoch 3/8 180/180 [==============================] - 190s - loss: 0.0309 - acc: 0.9896 - val_loss: 0.0489 - val_acc: 0.9845 Epoch 4/8 180/180 [==============================] - 190s - loss: 0.0222 - acc: 0.9925 - val_loss: 0.0469 - val_acc: 0.9855 Epoch 5/8 180/180 [==============================] - 190s - loss: 0.0210 - acc: 0.9929 - val_loss: 0.0482 - val_acc: 0.9850 Epoch 6/8 180/180 [==============================] - 190s - loss: 0.0189 - acc: 0.9932 - val_loss: 0.0478 - val_acc: 0.9850 Epoch 7/8 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft_bn(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval().mean() ###Output _____no_output_____ ###Markdown Setup ###Code path = "data/dogscats/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=64 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ###Output Found 23000 images belonging to 2 classes. Found 2000 images belonging to 2 classes. Found 0 images belonging to 0 classes. ###Markdown In this notebook we're going to create an ensemble of models and use their average as our predictions. 
For each ensemble, we're going to follow our usual fine-tuning steps:1) Create a model that retrains just the last layer2) Add this to a model containing all VGG layers except the last layer3) Fine-tune just the dense layers of this model (pre-computing the convolutional layers)4) Add data augmentation, fine-tuning the dense layers without pre-computation.So first, we need to create our VGG model and pre-compute the output of the conv layers: ###Code model = Vgg16().model conv_layers,fc_layers = split_at(model, Convolution2D) conv_model = Sequential(conv_layers) val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample) trn_features = conv_model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_convlayer_features.bc', trn_features) save_array(model_path + 'valid_convlayer_features.bc', val_features) ###Output _____no_output_____ ###Markdown In the future we can just load these precomputed features: ###Code trn_features = load_array(model_path+'train_convlayer_features.bc') val_features = load_array(model_path+'valid_convlayer_features.bc') ###Output _____no_output_____ ###Markdown We can also save some time by pre-computing the training and validation arrays with the image decoding and resizing already done: ###Code trn = get_data(path+'train') val = get_data(path+'valid') save_array(model_path+'train_data.bc', trn) save_array(model_path+'valid_data.bc', val) ###Output _____no_output_____ ###Markdown In the future we can just load these resized images: ###Code trn = load_array(model_path+'train_data.bc') val = load_array(model_path+'valid_data.bc') ###Output _____no_output_____ ###Markdown Finally, we can precompute the output of all but the last dropout and dense layers, for creating the first stage of the model: ###Code model.pop() model.pop() ll_val_feat = model.predict_generator(val_batches, val_batches.nb_sample) ll_feat = model.predict_generator(batches, batches.nb_sample) save_array(model_path + 'train_ll_feat.bc', ll_feat) save_array(model_path + 'valid_ll_feat.bc', ll_val_feat) ll_feat = load_array(model_path+ 'train_ll_feat.bc') ll_val_feat = load_array(model_path + 'valid_ll_feat.bc') ###Output _____no_output_____ ###Markdown ...and let's also grab the test data, for when we need to submit: ###Code test = get_data(path+'test') save_array(model_path+'test_data.bc', test) test = load_array(model_path+'test_data.bc') ###Output _____no_output_____ ###Markdown Last layer The functions automate creating a model that trains the last layer from scratch, and then adds those new layers on to the main model. 
###Code def get_ll_layers(): return [ BatchNormalization(input_shape=(4096,)), Dropout(0.5), Dense(2, activation='softmax') ] def train_last_layer(i): ll_layers = get_ll_layers() ll_model = Sequential(ll_layers) ll_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_model.optimizer.lr=1e-5 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=12) ll_model.optimizer.lr=1e-7 ll_model.fit(ll_feat, trn_labels, validation_data=(ll_val_feat, val_labels), nb_epoch=1) ll_model.save_weights(model_path+'ll_bn' + i + '.h5') vgg = Vgg16() model = vgg.model model.pop(); model.pop(); model.pop() for layer in model.layers: layer.trainable=False model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) ll_layers = get_ll_layers() for layer in ll_layers: model.add(layer) for l1,l2 in zip(ll_model.layers, model.layers[-3:]): l2.set_weights(l1.get_weights()) model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy']) model.save_weights(model_path+'bn' + i + '.h5') return model ###Output _____no_output_____ ###Markdown Dense model ###Code def get_conv_model(model): layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Convolution2D][-1] conv_layers = layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) fc_layers = layers[last_conv_idx+1:] return conv_model, fc_layers, last_conv_idx def get_fc_layers(p, in_shape): return [ MaxPooling2D(input_shape=in_shape), Flatten(), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(4096, activation='relu'), BatchNormalization(), Dropout(p), Dense(2, activation='softmax') ] def train_dense_layers(i, model): conv_model, fc_layers, last_conv_idx = get_conv_model(model) conv_shape = conv_model.output_shape[1:] fc_model = Sequential(get_fc_layers(0.5, conv_shape)) for l1,l2 in zip(fc_model.layers, fc_layers): weights = l2.get_weights() l1.set_weights(weights) fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) fc_model.fit(trn_features, trn_labels, nb_epoch=2, batch_size=batch_size, validation_data=(val_features, val_labels)) gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05, width_zoom_range=0.05, zoom_range=0.05, channel_shift_range=10, height_shift_range=0.05, shear_range=0.05, horizontal_flip=True) batches = gen.flow(trn, trn_labels, batch_size=batch_size) val_batches = image.ImageDataGenerator().flow(val, val_labels, shuffle=False, batch_size=batch_size) for layer in conv_model.layers: layer.trainable = False for layer in get_fc_layers(0.5, conv_shape): conv_model.add(layer) for l1,l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers): l1.set_weights(l2.get_weights()) conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.save_weights(model_path+'no_dropout_bn' + i + '.h5') conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.N) for layer in conv_model.layers[16:]: layer.trainable = True conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.optimizer.lr = 1e-7 conv_model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.save_weights(model_path + 'aug' + i + '.h5') ###Output _____no_output_____ 
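###Markdown Before kicking off the long fine-tuning runs it can be worth eyeballing what the augmentation inside `train_dense_layers` actually does. The cell below is only a rough sanity-check sketch: it re-creates a generator with similar settings (standard `ImageDataGenerator` arguments only), and it assumes `trn` is the channel-first image array loaded earlier and that the course's `plots()` helper is importable -- both of those are assumptions rather than things this notebook guarantees.
###Code
aug_gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05,
                                   height_shift_range=0.05, shear_range=0.05,
                                   zoom_range=0.05, channel_shift_range=10,
                                   horizontal_flip=True)
# Draw eight copies of the same image so each panel shows a different random transform
aug_iter = aug_gen.flow(np.concatenate([trn[:1]] * 8), batch_size=8, shuffle=False)
aug_imgs = next(aug_iter)
# Convert channel-first arrays to channel-last and clip to a valid pixel range for display;
# plots() is assumed to be the fast.ai utils helper used elsewhere in the course
plots([np.clip(np.rollaxis(im, 0, 3), 0, 255).astype(np.uint8) for im in aug_imgs], rows=2)
###Output _____no_output_____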
###Markdown Build ensemble ###Code for i in range(5): i = str(i) model = train_last_layer(i) train_dense_layers(i, model) ###Output Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5184 - acc: 0.7895 - val_loss: 0.1549 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1984 - acc: 0.9237 - val_loss: 0.0941 - val_acc: 0.9670 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1524 - acc: 0.9426 - val_loss: 0.0762 - val_acc: 0.9735 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1247 - acc: 0.9542 - val_loss: 0.0662 - val_acc: 0.9740 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1128 - acc: 0.9567 - val_loss: 0.0609 - val_acc: 0.9760 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1043 - acc: 0.9635 - val_loss: 0.0560 - val_acc: 0.9775 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1010 - acc: 0.9640 - val_loss: 0.0548 - val_acc: 0.9790 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9650 - val_loss: 0.0526 - val_acc: 0.9780 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0926 - acc: 0.9656 - val_loss: 0.0513 - val_acc: 0.9785 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0881 - acc: 0.9680 - val_loss: 0.0500 - val_acc: 0.9795 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0933 - acc: 0.9666 - val_loss: 0.0497 - val_acc: 0.9800 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0842 - acc: 0.9693 - val_loss: 0.0484 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0824 - acc: 0.9696 - val_loss: 0.0486 - val_acc: 0.9805 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0798 - acc: 0.9719 - val_loss: 0.0500 - val_acc: 0.9830 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0415 - acc: 0.9853 - val_loss: 0.0551 - val_acc: 0.9840 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0559 - acc: 0.9814 - val_loss: 0.0578 - val_acc: 0.9825 Epoch 1/8 23000/23000 [==============================] - 271s - loss: 0.0515 - acc: 0.9834 - val_loss: 0.0645 - val_acc: 0.9860 Epoch 2/8 23000/23000 [==============================] - 271s - loss: 0.0385 - acc: 0.9875 - val_loss: 0.0670 - val_acc: 0.9850 Epoch 3/8 23000/23000 [==============================] - 271s - loss: 0.0313 - acc: 0.9890 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 4/8 23000/23000 [==============================] - 271s - loss: 0.0287 - acc: 0.9903 - val_loss: 0.0733 - val_acc: 0.9840 Epoch 5/8 23000/23000 [==============================] - 271s - loss: 0.0244 - acc: 0.9924 - val_loss: 0.0773 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 271s - loss: 0.0205 - acc: 0.9927 - val_loss: 0.0900 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 271s - loss: 0.0209 - acc: 0.9929 - val_loss: 0.0860 - val_acc: 0.9865 Epoch 8/8 23000/23000 [==============================] - 420s - loss: 0.0186 - acc: 0.9930 - val_loss: 0.0923 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 315s - loss: 0.0196 - acc: 0.9930 - val_loss: 0.0909 - val_acc: 0.9845 Epoch 2/10 23000/23000 [==============================] - 362s - loss: 0.0165 - acc: 
0.9945 - val_loss: 0.1023 - val_acc: 0.9830 Epoch 3/10 23000/23000 [==============================] - 447s - loss: 0.0179 - acc: 0.9940 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 601s - loss: 0.0112 - acc: 0.9960 - val_loss: 0.1030 - val_acc: 0.9830 Epoch 5/10 23000/23000 [==============================] - 528s - loss: 0.0130 - acc: 0.9956 - val_loss: 0.0946 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 657s - loss: 0.0110 - acc: 0.9961 - val_loss: 0.0904 - val_acc: 0.9850 Epoch 7/10 23000/23000 [==============================] - 621s - loss: 0.0116 - acc: 0.9963 - val_loss: 0.0872 - val_acc: 0.9865 Epoch 8/10 23000/23000 [==============================] - 603s - loss: 0.0118 - acc: 0.9960 - val_loss: 0.0813 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 616s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.1053 - val_acc: 0.9835 Epoch 10/10 23000/23000 [==============================] - 661s - loss: 0.0098 - acc: 0.9968 - val_loss: 0.0970 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5106 - acc: 0.7935 - val_loss: 0.1504 - val_acc: 0.9455 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2005 - acc: 0.9241 - val_loss: 0.0890 - val_acc: 0.9680 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1465 - acc: 0.9444 - val_loss: 0.0714 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1280 - acc: 0.9540 - val_loss: 0.0614 - val_acc: 0.9765 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1131 - acc: 0.9586 - val_loss: 0.0561 - val_acc: 0.9795 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1079 - acc: 0.9620 - val_loss: 0.0515 - val_acc: 0.9795 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0998 - acc: 0.9631 - val_loss: 0.0484 - val_acc: 0.9825 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0947 - acc: 0.9673 - val_loss: 0.0457 - val_acc: 0.9845 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9676 - val_loss: 0.0449 - val_acc: 0.9855 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0921 - acc: 0.9670 - val_loss: 0.0451 - val_acc: 0.9845 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0893 - acc: 0.9681 - val_loss: 0.0441 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0836 - acc: 0.9691 - val_loss: 0.0428 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0833 - acc: 0.9718 - val_loss: 0.0434 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0814 - acc: 0.9736 - val_loss: 0.0463 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0389 - acc: 0.9859 - val_loss: 0.0493 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0613 - acc: 0.9807 - val_loss: 0.0563 - val_acc: 0.9855 Epoch 1/8 23000/23000 [==============================] - 325s - loss: 0.0450 - acc: 0.9860 - val_loss: 0.0685 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 766s - loss: 0.0364 - acc: 0.9877 - val_loss: 0.0616 - val_acc: 0.9845 Epoch 3/8 23000/23000 
[==============================] - 600s - loss: 0.0338 - acc: 0.9891 - val_loss: 0.0585 - val_acc: 0.9845 Epoch 4/8 23000/23000 [==============================] - 634s - loss: 0.0288 - acc: 0.9903 - val_loss: 0.0740 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0265 - acc: 0.9904 - val_loss: 0.0789 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 780s - loss: 0.0254 - acc: 0.9909 - val_loss: 0.0853 - val_acc: 0.9855 Epoch 7/8 23000/23000 [==============================] - 680s - loss: 0.0180 - acc: 0.9937 - val_loss: 0.0747 - val_acc: 0.9870 Epoch 8/8 23000/23000 [==============================] - 776s - loss: 0.0191 - acc: 0.9939 - val_loss: 0.0871 - val_acc: 0.9845 Epoch 1/10 23000/23000 [==============================] - 712s - loss: 0.0191 - acc: 0.9929 - val_loss: 0.0943 - val_acc: 0.9855 Epoch 2/10 23000/23000 [==============================] - 679s - loss: 0.0175 - acc: 0.9946 - val_loss: 0.0723 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 640s - loss: 0.0148 - acc: 0.9949 - val_loss: 0.0756 - val_acc: 0.9845 Epoch 4/10 23000/23000 [==============================] - 761s - loss: 0.0147 - acc: 0.9953 - val_loss: 0.0772 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 733s - loss: 0.0163 - acc: 0.9946 - val_loss: 0.0931 - val_acc: 0.9830 Epoch 6/10 23000/23000 [==============================] - 574s - loss: 0.0107 - acc: 0.9967 - val_loss: 0.0874 - val_acc: 0.9845 Epoch 7/10 23000/23000 [==============================] - 611s - loss: 0.0123 - acc: 0.9958 - val_loss: 0.0918 - val_acc: 0.9855 Epoch 8/10 23000/23000 [==============================] - 668s - loss: 0.0098 - acc: 0.9965 - val_loss: 0.0896 - val_acc: 0.9855 Epoch 9/10 23000/23000 [==============================] - 624s - loss: 0.0096 - acc: 0.9964 - val_loss: 0.1012 - val_acc: 0.9850 Epoch 10/10 23000/23000 [==============================] - 747s - loss: 0.0113 - acc: 0.9960 - val_loss: 0.0961 - val_acc: 0.9835 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 1s - loss: 0.5167 - acc: 0.7867 - val_loss: 0.1299 - val_acc: 0.9555 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1922 - acc: 0.9265 - val_loss: 0.0803 - val_acc: 0.9695 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1461 - acc: 0.9454 - val_loss: 0.0646 - val_acc: 0.9745 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1255 - acc: 0.9536 - val_loss: 0.0543 - val_acc: 0.9790 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1113 - acc: 0.9608 - val_loss: 0.0505 - val_acc: 0.9820 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1058 - acc: 0.9607 - val_loss: 0.0464 - val_acc: 0.9825 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.0957 - acc: 0.9654 - val_loss: 0.0448 - val_acc: 0.9840 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0964 - acc: 0.9657 - val_loss: 0.0427 - val_acc: 0.9850 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0996 - acc: 0.9662 - val_loss: 0.0420 - val_acc: 0.9860 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0931 - acc: 0.9670 - val_loss: 0.0408 - val_acc: 0.9855 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0899 - acc: 0.9680 - val_loss: 0.0395 - val_acc: 0.9860 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 
0.0837 - acc: 0.9717 - val_loss: 0.0390 - val_acc: 0.9860 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9703 - val_loss: 0.0391 - val_acc: 0.9865 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0796 - acc: 0.9735 - val_loss: 0.0382 - val_acc: 0.9855 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0353 - acc: 0.9874 - val_loss: 0.0364 - val_acc: 0.9880 Epoch 1/1 23000/23000 [==============================] - 271s - loss: 0.0622 - acc: 0.9802 - val_loss: 0.0490 - val_acc: 0.9870 Epoch 1/8 23000/23000 [==============================] - 773s - loss: 0.0426 - acc: 0.9856 - val_loss: 0.0442 - val_acc: 0.9885 Epoch 2/8 23000/23000 [==============================] - 774s - loss: 0.0394 - acc: 0.9864 - val_loss: 0.0501 - val_acc: 0.9885 Epoch 3/8 23000/23000 [==============================] - 687s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0500 - val_acc: 0.9875 Epoch 4/8 23000/23000 [==============================] - 655s - loss: 0.0292 - acc: 0.9900 - val_loss: 0.0535 - val_acc: 0.9870 Epoch 5/8 23000/23000 [==============================] - 791s - loss: 0.0268 - acc: 0.9914 - val_loss: 0.0605 - val_acc: 0.9855 Epoch 6/8 23000/23000 [==============================] - 789s - loss: 0.0208 - acc: 0.9926 - val_loss: 0.0591 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 798s - loss: 0.0191 - acc: 0.9931 - val_loss: 0.0638 - val_acc: 0.9860 Epoch 8/8 23000/23000 [==============================] - 708s - loss: 0.0192 - acc: 0.9932 - val_loss: 0.0597 - val_acc: 0.9850 Epoch 1/10 23000/23000 [==============================] - 606s - loss: 0.0178 - acc: 0.9942 - val_loss: 0.0620 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 756s - loss: 0.0158 - acc: 0.9941 - val_loss: 0.0694 - val_acc: 0.9850 Epoch 3/10 23000/23000 [==============================] - 418s - loss: 0.0176 - acc: 0.9939 - val_loss: 0.0641 - val_acc: 0.9855 Epoch 4/10 23000/23000 [==============================] - 271s - loss: 0.0118 - acc: 0.9958 - val_loss: 0.0623 - val_acc: 0.9840 Epoch 5/10 23000/23000 [==============================] - 271s - loss: 0.0150 - acc: 0.9947 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 6/10 23000/23000 [==============================] - 271s - loss: 0.0119 - acc: 0.9961 - val_loss: 0.0595 - val_acc: 0.9880 Epoch 7/10 23000/23000 [==============================] - 304s - loss: 0.0121 - acc: 0.9957 - val_loss: 0.0668 - val_acc: 0.9885 Epoch 8/10 23000/23000 [==============================] - 273s - loss: 0.0124 - acc: 0.9960 - val_loss: 0.0619 - val_acc: 0.9885 Epoch 9/10 23000/23000 [==============================] - 271s - loss: 0.0099 - acc: 0.9963 - val_loss: 0.0649 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 273s - loss: 0.0091 - acc: 0.9970 - val_loss: 0.0628 - val_acc: 0.9890 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.4585 - acc: 0.8130 - val_loss: 0.1306 - val_acc: 0.9515 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.1920 - acc: 0.9270 - val_loss: 0.0863 - val_acc: 0.9655 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1504 - acc: 0.9450 - val_loss: 0.0705 - val_acc: 0.9740 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1275 - acc: 0.9529 - val_loss: 0.0592 - val_acc: 0.9795 Epoch 5/12 
23000/23000 [==============================] - 0s - loss: 0.1190 - acc: 0.9555 - val_loss: 0.0555 - val_acc: 0.9815 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1068 - acc: 0.9609 - val_loss: 0.0536 - val_acc: 0.9805 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1003 - acc: 0.9624 - val_loss: 0.0496 - val_acc: 0.9830 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0979 - acc: 0.9660 - val_loss: 0.0482 - val_acc: 0.9825 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0913 - acc: 0.9678 - val_loss: 0.0475 - val_acc: 0.9830 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0917 - acc: 0.9666 - val_loss: 0.0458 - val_acc: 0.9825 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0980 - acc: 0.9665 - val_loss: 0.0454 - val_acc: 0.9840 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0919 - acc: 0.9675 - val_loss: 0.0443 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0883 - acc: 0.9685 - val_loss: 0.0440 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0825 - acc: 0.9720 - val_loss: 0.0437 - val_acc: 0.9850 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0359 - acc: 0.9874 - val_loss: 0.0474 - val_acc: 0.9850 Epoch 1/1 23000/23000 [==============================] - 272s - loss: 0.0581 - acc: 0.9817 - val_loss: 0.0562 - val_acc: 0.9850 Epoch 1/8 23000/23000 [==============================] - 520s - loss: 0.0486 - acc: 0.9833 - val_loss: 0.0590 - val_acc: 0.9830 Epoch 2/8 23000/23000 [==============================] - 745s - loss: 0.0379 - acc: 0.9867 - val_loss: 0.0595 - val_acc: 0.9840 Epoch 3/8 23000/23000 [==============================] - 736s - loss: 0.0329 - acc: 0.9881 - val_loss: 0.0628 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 708s - loss: 0.0260 - acc: 0.9903 - val_loss: 0.0722 - val_acc: 0.9855 Epoch 5/8 23000/23000 [==============================] - 700s - loss: 0.0250 - acc: 0.9921 - val_loss: 0.0734 - val_acc: 0.9840 Epoch 6/8 23000/23000 [==============================] - 802s - loss: 0.0212 - acc: 0.9923 - val_loss: 0.0721 - val_acc: 0.9845 Epoch 7/8 23000/23000 [==============================] - 765s - loss: 0.0211 - acc: 0.9928 - val_loss: 0.0772 - val_acc: 0.9835 Epoch 8/8 23000/23000 [==============================] - 743s - loss: 0.0185 - acc: 0.9933 - val_loss: 0.0756 - val_acc: 0.9835 Epoch 1/10 23000/23000 [==============================] - 782s - loss: 0.0168 - acc: 0.9941 - val_loss: 0.0815 - val_acc: 0.9860 Epoch 2/10 23000/23000 [==============================] - 580s - loss: 0.0155 - acc: 0.9942 - val_loss: 0.0771 - val_acc: 0.9840 Epoch 3/10 23000/23000 [==============================] - 654s - loss: 0.0142 - acc: 0.9954 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 4/10 23000/23000 [==============================] - 692s - loss: 0.0141 - acc: 0.9955 - val_loss: 0.0716 - val_acc: 0.9870 Epoch 5/10 23000/23000 [==============================] - 607s - loss: 0.0120 - acc: 0.9959 - val_loss: 0.0757 - val_acc: 0.9850 Epoch 6/10 23000/23000 [==============================] - 789s - loss: 0.0129 - acc: 0.9956 - val_loss: 0.0741 - val_acc: 0.9860 Epoch 7/10 23000/23000 [==============================] - 767s - loss: 0.0111 - acc: 0.9960 - val_loss: 0.0747 - val_acc: 0.9865 Epoch 8/10 
23000/23000 [==============================] - 557s - loss: 0.0103 - acc: 0.9967 - val_loss: 0.0774 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 521s - loss: 0.0106 - acc: 0.9962 - val_loss: 0.0855 - val_acc: 0.9855 Epoch 10/10 23000/23000 [==============================] - 484s - loss: 0.0095 - acc: 0.9970 - val_loss: 0.0780 - val_acc: 0.9850 Train on 23000 samples, validate on 2000 samples Epoch 1/12 23000/23000 [==============================] - 0s - loss: 0.5435 - acc: 0.7783 - val_loss: 0.1669 - val_acc: 0.9440 Epoch 2/12 23000/23000 [==============================] - 0s - loss: 0.2054 - acc: 0.9227 - val_loss: 0.0999 - val_acc: 0.9675 Epoch 3/12 23000/23000 [==============================] - 0s - loss: 0.1549 - acc: 0.9405 - val_loss: 0.0763 - val_acc: 0.9725 Epoch 4/12 23000/23000 [==============================] - 0s - loss: 0.1327 - acc: 0.9520 - val_loss: 0.0642 - val_acc: 0.9755 Epoch 5/12 23000/23000 [==============================] - 0s - loss: 0.1147 - acc: 0.9573 - val_loss: 0.0590 - val_acc: 0.9790 Epoch 6/12 23000/23000 [==============================] - 0s - loss: 0.1078 - acc: 0.9605 - val_loss: 0.0545 - val_acc: 0.9815 Epoch 7/12 23000/23000 [==============================] - 0s - loss: 0.1001 - acc: 0.9631 - val_loss: 0.0526 - val_acc: 0.9820 Epoch 8/12 23000/23000 [==============================] - 0s - loss: 0.0977 - acc: 0.9654 - val_loss: 0.0515 - val_acc: 0.9815 Epoch 9/12 23000/23000 [==============================] - 0s - loss: 0.0937 - acc: 0.9660 - val_loss: 0.0497 - val_acc: 0.9825 Epoch 10/12 23000/23000 [==============================] - 0s - loss: 0.0942 - acc: 0.9683 - val_loss: 0.0489 - val_acc: 0.9835 Epoch 11/12 23000/23000 [==============================] - 0s - loss: 0.0904 - acc: 0.9687 - val_loss: 0.0473 - val_acc: 0.9830 Epoch 12/12 23000/23000 [==============================] - 0s - loss: 0.0855 - acc: 0.9689 - val_loss: 0.0469 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/1 23000/23000 [==============================] - 0s - loss: 0.0861 - acc: 0.9685 - val_loss: 0.0470 - val_acc: 0.9840 Train on 23000 samples, validate on 2000 samples Epoch 1/2 23000/23000 [==============================] - 21s - loss: 0.0846 - acc: 0.9719 - val_loss: 0.0510 - val_acc: 0.9845 Epoch 2/2 23000/23000 [==============================] - 21s - loss: 0.0392 - acc: 0.9866 - val_loss: 0.0548 - val_acc: 0.9860 Epoch 1/1 23000/23000 [==============================] - 273s - loss: 0.0585 - acc: 0.9800 - val_loss: 0.0608 - val_acc: 0.9875 Epoch 1/8 23000/23000 [==============================] - 677s - loss: 0.0456 - acc: 0.9845 - val_loss: 0.0690 - val_acc: 0.9840 Epoch 2/8 23000/23000 [==============================] - 654s - loss: 0.0398 - acc: 0.9859 - val_loss: 0.0763 - val_acc: 0.9835 Epoch 3/8 23000/23000 [==============================] - 711s - loss: 0.0304 - acc: 0.9894 - val_loss: 0.0662 - val_acc: 0.9840 Epoch 4/8 23000/23000 [==============================] - 646s - loss: 0.0252 - acc: 0.9913 - val_loss: 0.0747 - val_acc: 0.9845 Epoch 5/8 23000/23000 [==============================] - 726s - loss: 0.0246 - acc: 0.9909 - val_loss: 0.0809 - val_acc: 0.9850 Epoch 6/8 23000/23000 [==============================] - 582s - loss: 0.0182 - acc: 0.9933 - val_loss: 0.0715 - val_acc: 0.9850 Epoch 7/8 23000/23000 [==============================] - 627s - loss: 0.0201 - acc: 0.9928 - val_loss: 0.0789 - val_acc: 0.9850 Epoch 8/8 23000/23000 [==============================] - 674s - loss: 0.0172 - acc: 0.9944 - 
val_loss: 0.0717 - val_acc: 0.9855 Epoch 1/10 23000/23000 [==============================] - 736s - loss: 0.0171 - acc: 0.9939 - val_loss: 0.0820 - val_acc: 0.9850 Epoch 2/10 23000/23000 [==============================] - 634s - loss: 0.0184 - acc: 0.9941 - val_loss: 0.0829 - val_acc: 0.9860 Epoch 3/10 23000/23000 [==============================] - 599s - loss: 0.0156 - acc: 0.9946 - val_loss: 0.0863 - val_acc: 0.9865 Epoch 4/10 23000/23000 [==============================] - 717s - loss: 0.0142 - acc: 0.9952 - val_loss: 0.0903 - val_acc: 0.9850 Epoch 5/10 23000/23000 [==============================] - 809s - loss: 0.0116 - acc: 0.9960 - val_loss: 0.0883 - val_acc: 0.9860 Epoch 6/10 23000/23000 [==============================] - 754s - loss: 0.0127 - acc: 0.9953 - val_loss: 0.0887 - val_acc: 0.9855 Epoch 7/10 23000/23000 [==============================] - 499s - loss: 0.0100 - acc: 0.9964 - val_loss: 0.0835 - val_acc: 0.9850 Epoch 8/10 23000/23000 [==============================] - 317s - loss: 0.0090 - acc: 0.9971 - val_loss: 0.0804 - val_acc: 0.9870 Epoch 9/10 23000/23000 [==============================] - 301s - loss: 0.0111 - acc: 0.9963 - val_loss: 0.0869 - val_acc: 0.9865 Epoch 10/10 23000/23000 [==============================] - 442s - loss: 0.0079 - acc: 0.9971 - val_loss: 0.0805 - val_acc: 0.9870 ###Markdown Combine ensemble and test ###Code ens_model = vgg_ft(2) for layer in ens_model.layers: layer.trainable=True def get_ens_pred(arr, fname): ens_pred = [] for i in range(5): i = str(i) ens_model.load_weights('{}{}{}.h5'.format(model_path, fname, i)) preds = ens_model.predict(arr, batch_size=batch_size) ens_pred.append(preds) return ens_pred val_pred2 = get_ens_pred(val, 'aug') val_avg_preds2 = np.stack(val_pred2).mean(axis=0) categorical_accuracy(val_labels, val_avg_preds2).eval() ###Output _____no_output_____
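###Markdown The same averaging trick works for the test images we grabbed earlier, which is what we'd use for a submission. The sketch below is illustrative rather than part of the pipeline above: it assumes `test`, `test_filenames`, `get_ens_pred` and `batch_size` are defined as in this notebook, that column 1 of the predictions is the 'dog' probability, and that test filenames look like 'unknown/1234.jpg' (adjust the id parsing to the real folder layout); the clipping bounds and output filename are arbitrary choices.
###Code
test_preds = get_ens_pred(test, 'aug')
test_avg_preds = np.stack(test_preds).mean(axis=0)

# Clip probabilities away from 0 and 1 so a confidently wrong prediction
# doesn't blow up the log loss
isdog = np.clip(test_avg_preds[:, 1], 0.025, 0.975)

# e.g. 'unknown/1234.jpg' -> 1234 (hypothetical layout -- adjust to the actual test folder)
ids = np.array([int(f.split('/')[-1].split('.')[0]) for f in test_filenames])

subm = np.stack([ids, isdog], axis=1)
np.savetxt('ensemble_submission.csv', subm, fmt='%d,%.5f', header='id,label', comments='')
###Output _____no_output_____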
DAY3-TASK1/DAY3-TASK1-TEAM-TWEEK.ipynb
###Markdown CHECKING FOR COUNT OF NULL VALUES ###Code df.isnull().sum() df.isnull().sum()/df.shape[0]*100.0 df=df.drop(columns="Unnamed: 0") df.shape df1=df.copy() ###Output _____no_output_____ ###Markdown CLEANING -"wind_speed" DISTPLOT FOR - "wind_speed" ###Code sns.distplot(df["wind_speed"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1=df1.drop(df1[df1['wind_speed'] < 0].index) sns.distplot(df1["wind_speed"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['wind_speed'].interpolate(method='polynomial',order=5, direction = 'both', inplace=True) df1["wind_speed"].isnull().sum() sns.distplot(df1["wind_speed"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) ###Output C:\ProgramData\Anaconda3\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). warnings.warn(msg, FutureWarning) ###Markdown CLEANING -"wind_direction" FINDING THE MOST OCCURRING WIND DIRECTION ###Code index = df1["wind_direction"] index.value_counts() df1["wind_direction"]=df["wind_direction"].fillna("NE") df1["wind_direction"].isnull().sum() ###Output _____no_output_____ ###Markdown CLEANING - "rain" DISTPLOT FOR- "rain" ###Code sns.distplot(df1["rain"], bins=20, kde_kws={'linewidth':3, 'color':'#DC143C'}) #df1["rain"]=df["rain"].ffill() df1['rain'].interpolate(method='linear', direction = 'both', inplace=True) df1["rain"].isnull().sum() ###Output _____no_output_____ ###Markdown CLEANING -"pressure" CONVERTING OBJECT TO NUMERIC DATA TYPE ###Code df1["pressure"][0]="1023.7" df1["pressure"]=pd.to_numeric(df1["pressure"]) ###Output <ipython-input-21-36eff2814c0e>:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df1["pressure"][0]="1023.7" ###Markdown DISTPLOT FOR PRESSURE ###Code sns.distplot(df1["pressure"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1.drop(df1[df1['pressure'] < 0].index, inplace = True) sns.distplot(df1["pressure"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['pressure'].interpolate(method='polynomial',order=5, direction = 'both', inplace=True) sns.distplot(df1["pressure"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1["pressure"].isnull().sum() ###Output _____no_output_____ ###Markdown CLEANING-"temperature" CONVERTING TEMPERATURE TO KELVIN ###Code df1["temperature"]=df1["temperature"]+273 df1["temperature"].isnull().sum() ###Output _____no_output_____ ###Markdown DISTPLOT FOR TEMPERATURE ###Code sns.distplot(df1["temperature"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['temperature'].interpolate(method='polynomial',order=5, direction = 'both', inplace=True) df1["temperature"].isnull().sum() ###Output _____no_output_____ ###Markdown CLEANING-"PM2.5" ###Code df1["PM2.5"] df1["PM2.5"].isnull().sum() ###Output _____no_output_____ ###Markdown HISTPLOT FOR -"PM2.5" ###Code plt.hist(df1["PM2.5"],bins=[0,10000,20000,30000,40000],color='#abcdef') sns.distplot(df1["PM2.5"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1["PM2.5"].value_counts() df1['PM2.5'].interpolate(method='linear', direction = 'both', inplace=True) ###Output _____no_output_____ ###Markdown CLEANING-"hour" ###Code df1["PM2.5"].isnull().sum() 
###Output _____no_output_____ ###Markdown DISTPLOT FOR-"hour" ###Code sns.distplot(df1["hour"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1=df1.drop(df1[df1['hour'] < 0].index) df1=df1.drop(df1[df1['hour'] > 23].index) sns.distplot(df1["hour"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['hour'].interpolate(method='linear', direction = 'both', inplace=True) df1.hour.isnull().sum() sns.distplot(df1["hour"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) index = df1["hour"] index.value_counts() ###Output _____no_output_____ ###Markdown CLEANING-"day" ###Code df1['day'].isnull().sum() ###Output _____no_output_____ ###Markdown HISTPLOT FOR -"day" ###Code plt.hist(df1["day"],bins=[0,50,100,150,200],color='#abcdef') df1["day"].value_counts() df1=df1.drop(df1[df1['day'] <0].index) sns.distplot(df1["day"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['day'].interpolate(method='quadratic', direction = 'both', inplace=True) df1.hour.isnull().sum() ###Output _____no_output_____ ###Markdown CLEANING FOR- "month" ###Code df1['month'].isnull().sum() df1["month"].value_counts() df1=df1.drop(df1[df1['month'] <0].index) df1 df1['month'].interpolate(method='polynomial',order=7, direction = 'both', inplace=True) df1.hour.isnull().sum() sns.distplot(df1["month"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) ###Output C:\ProgramData\Anaconda3\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). warnings.warn(msg, FutureWarning) ###Markdown CLEANING -"year" ###Code df1.year.isnull().sum() df1["year"].value_counts() ###Output _____no_output_____ ###Markdown CONVERTING OBJECT TO NUMERIC DATA TYPE ###Code df1["year"]=pd.to_numeric(df1["year"]) df1["year"] ###Output _____no_output_____ ###Markdown DISTPLOT FOR -"year" ###Code sns.distplot(df1["year"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) df1['year'].interpolate(method='polynomial', order=5,direction = 'both', inplace=True) df1.hour.isnull().sum() sns.distplot(df1["year"], bins=20, kde_kws={'linewidth':5, 'color':'#DC143C'}) ###Output C:\ProgramData\Anaconda3\lib\site-packages\seaborn\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). 
warnings.warn(msg, FutureWarning) ###Markdown CLEANING DONE DATA SHAPES AFTER CLEANING ###Code print("Original data shape=",df.shape) print("Cleaned data shape=",df1.shape) data=df.isnull().sum() data1=df1.isnull().sum() print("NULL COUNT IN ORIGINAL DATA=") print(data) print("\nNULL COUNT IN CLEANED DATA=") print(data1) ###Output NULL COUNT IN ORIGINAL DATA= year 15 month 6 day 20 hour 8 PM2.5 14 temperature 19 pressure 27 rain 12 wind_direction 60 wind_speed 25 dtype: int64 NULL COUNT IN CLEANED DATA= year 0 month 0 day 0 hour 0 PM2.5 0 temperature 0 pressure 0 rain 0 wind_direction 0 wind_speed 0 dtype: int64 ###Markdown DESCRIPTION OF CLEANED DATA ###Code df1.describe() df1.index = [i for i in range(1, 31521)] df1 from sklearn.preprocessing import LabelEncoder le=LabelEncoder() label=le.fit_transform(df1["wind_direction"]) le.classes_ df3=df1.drop(columns="wind_direction") df3["wind_direction"]=label ###Output _____no_output_____ ###Markdown df3 is cleaned data ###Code df3 df3.to_csv('cleaned.csv') ###Output _____no_output_____
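###Markdown SIDE NOTE ON THE ENCODING -- LabelEncoder assigns its integer codes in sorted (alphabetical) order of the categories, so it is worth recording which code corresponds to which wind direction before modelling on cleaned.csv. A small sketch reusing the fitted `le` from above (the printed mapping shown in the comment is only an example; the actual codes depend on the categories present in the data):
###Code
# Record which integer code LabelEncoder gave each wind_direction category
mapping = dict(zip(le.classes_, le.transform(le.classes_)))
print(mapping)   # e.g. {'E': 0, 'ENE': 1, ...}
###Output _____no_output_____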
tensorflow/examples/udacity/1_notmnist.ipynb
###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' 
% ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. 
Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
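###Markdown Before the full merge implementation below, a minimal sketch of the slicing it performs for each class, using a small random array in place of a real letter pickle (the array and the per-class sizes here are illustrative assumptions, not values from the run): every shuffled letter set is cut into a disjoint validation slice followed by a training slice. ###Code import numpy as np

# Stand-in for one letter's pickle: 50 fake 28x28 "images"
fake_letter_set = np.random.rand(50, 28, 28).astype(np.float32)
np.random.shuffle(fake_letter_set)

vsize_per_class, tsize_per_class = 10, 30
valid_part = fake_letter_set[:vsize_per_class, :, :]                                   # first 10 go to validation
train_part = fake_letter_set[vsize_per_class:vsize_per_class + tsize_per_class, :, :]  # next 30 go to training
print(valid_part.shape, train_part.shape)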
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
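###Markdown A tiny illustration (with made-up arrays, not the real dataset) of why the cell below applies the *same* permutation to the images and to the labels: indexing both arrays with one permutation reorders them in lockstep, so every image keeps its label. ###Code import numpy as np

toy_data = np.arange(12).reshape(4, 3)      # four "images" of three pixels each
toy_labels = np.array([0, 1, 2, 3])
perm = np.random.permutation(toy_labels.shape[0])
print(toy_data[perm])
print(toy_labels[perm])                     # rows and labels are shuffled together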
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '/Users/jakezidow/.keras/datasets' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified /Users/jakezidow/.keras/datasets/notMNIST_large.tar.gz Found and verified /Users/jakezidow/.keras/datasets/notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output /Users/jakezidow/.keras/datasets/notMNIST_large already present - Skipping extraction of /Users/jakezidow/.keras/datasets/notMNIST_large.tar.gz. ['/Users/jakezidow/.keras/datasets/notMNIST_large/A', '/Users/jakezidow/.keras/datasets/notMNIST_large/B', '/Users/jakezidow/.keras/datasets/notMNIST_large/C', '/Users/jakezidow/.keras/datasets/notMNIST_large/D', '/Users/jakezidow/.keras/datasets/notMNIST_large/E', '/Users/jakezidow/.keras/datasets/notMNIST_large/F', '/Users/jakezidow/.keras/datasets/notMNIST_large/G', '/Users/jakezidow/.keras/datasets/notMNIST_large/H', '/Users/jakezidow/.keras/datasets/notMNIST_large/I', '/Users/jakezidow/.keras/datasets/notMNIST_large/J'] /Users/jakezidow/.keras/datasets/notMNIST_small already present - Skipping extraction of /Users/jakezidow/.keras/datasets/notMNIST_small.tar.gz. 
['/Users/jakezidow/.keras/datasets/notMNIST_small/A', '/Users/jakezidow/.keras/datasets/notMNIST_small/B', '/Users/jakezidow/.keras/datasets/notMNIST_small/C', '/Users/jakezidow/.keras/datasets/notMNIST_small/D', '/Users/jakezidow/.keras/datasets/notMNIST_small/E', '/Users/jakezidow/.keras/datasets/notMNIST_small/F', '/Users/jakezidow/.keras/datasets/notMNIST_small/G', '/Users/jakezidow/.keras/datasets/notMNIST_small/H', '/Users/jakezidow/.keras/datasets/notMNIST_small/I', '/Users/jakezidow/.keras/datasets/notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- ###Code import csv import matplotlib.image as mpimg import random #display first image in each training folder for x in range(len(train_folders)): pics = os.listdir(train_folders[x]) pic_path = os.path.join(train_folders[x], pics[0]) i = Image(filename=pic_path) display(i) # img = mpimg.imread(pic_path) #Only prints out last picture # print("IMAGE SHAPE",img.shape) # print(img[0]) # imgplot = plt.imshow(img) ###Output _____no_output_____ ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' 
% set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output /Users/jakezidow/.keras/datasets/notMNIST_large/A.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/B.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/C.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/D.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/E.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/F.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/G.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/H.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/I.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_large/J.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/A.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/B.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/C.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/D.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/E.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/F.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/G.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/H.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/I.pickle already present - Skipping pickling. /Users/jakezidow/.keras/datasets/notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code # print(type(train_datasets)) # img = mpimg.imread(train_datasets[0]) #Only prints out last picture # print("IMAGE SHAPE",img.shape) # imgplot = plt.imshow(img) # hold on; #WTF-- forms online suggest this for pickle_file in train_datasets: #todo: FIGURE OUT HOW TO PLOT MULTIPLE GRAPHS IN FOR LOOP try: with open(pickle_file, 'rb') as file: pickle_dataset = pickle.load(file) except Exception as e: print('CANNOT READ :', pickle_file, '--',e) # return print("LABEL ",pickle_file.split('/')[-1]) dataset = list(pickle_dataset) # Shape: N x 28 x 28 plt.imshow(dataset[0]) ###Output LABEL A.pickle LABEL B.pickle LABEL C.pickle LABEL D.pickle LABEL E.pickle LABEL F.pickle LABEL G.pickle LABEL H.pickle LABEL I.pickle LABEL J.pickle ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. 
Verify that.--- ###Code for pickle_file in train_datasets: #todo: FIGURE OUT HOW TO PLOT MULTIPLE GRAPHS IN FOR LOOP try: with open(pickle_file, 'rb') as file: pickle_dataset = pickle.load(file) except Exception as e: print('CANNOT READ :', pickle_file, '--',e) # return dataset = list(pickle_dataset) # Shape: N x 28 x 28 print("TRAINING ",pickle_file.split('/')[-1], "Contains ", len(dataset), "observations") for pickle_file in test_datasets: #todo: FIGURE OUT HOW TO PLOT MULTIPLE GRAPHS IN FOR LOOP try: with open(pickle_file, 'rb') as file: pickle_dataset = pickle.load(file) except Exception as e: print('CANNOT READ :', pickle_file, '--',e) # return dataset = list(pickle_dataset) # Shape: N x 28 x 28 print("TRAINING ",pickle_file.split('/')[-1], "Contains ", len(dataset), "observations") ###Output TRAINING A.pickle Contains 52909 observations TRAINING B.pickle Contains 52911 observations TRAINING C.pickle Contains 52912 observations TRAINING D.pickle Contains 52911 observations TRAINING E.pickle Contains 52912 observations TRAINING F.pickle Contains 52912 observations TRAINING G.pickle Contains 52912 observations TRAINING H.pickle Contains 52912 observations TRAINING I.pickle Contains 52912 observations TRAINING J.pickle Contains 52911 observations TRAINING A.pickle Contains 1872 observations TRAINING B.pickle Contains 1873 observations TRAINING C.pickle Contains 1873 observations TRAINING D.pickle Contains 1873 observations TRAINING E.pickle Contains 1873 observations TRAINING F.pickle Contains 1872 observations TRAINING G.pickle Contains 1872 observations TRAINING H.pickle Contains 1872 observations TRAINING I.pickle Contains 1872 observations TRAINING J.pickle Contains 1872 observations ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
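###Markdown As a rough back-of-the-envelope check on the "fit it all in memory" remark above (purely illustrative arithmetic, using the sizes chosen in the next cell): the merged float32 arrays cost about 4 bytes per pixel. ###Code # Approximate memory footprint of the merged float32 arrays
train_size, valid_size, test_size, image_size = 200000, 10000, 10000, 28
bytes_per_image = image_size * image_size * 4              # float32
total_mb = (train_size + valid_size + test_size) * bytes_per_image / 1e6
print('roughly %.0f MB for train + validation + test' % total_mb)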
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
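###Markdown One extra sanity check worth keeping around the shuffle below (a sketch, assuming `train_labels` from the merge above is in scope): a permutation only reorders rows, so the per-class counts printed here should be exactly the same if the cell is re-run after shuffling. ###Code import numpy as np

# Per-class counts of the merged training labels; unchanged by any permutation
print(np.bincount(train_labels, minlength=10))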
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code import random letter_array = ['A','B','C','D','E','F','G', 'H', 'I', 'J'] random_training_index = random.sample(list(np.arange(train_dataset.shape[0])),1) print("label ",train_labels[random_training_index][0]) print("letter label",letter_array[train_labels[random_training_index][0]]) print("Image shape: ", train_dataset[random_training_index].shape) plt.imshow(train_dataset[random_training_index].reshape((28,28))) random_test_index = random.sample(list(np.arange(test_dataset.shape[0])),1) print("label ",test_labels[random_test_index]) print("letter label",letter_array[test_labels[random_test_index][0]]) print("Image shape: ", test_dataset[random_test_index].shape) plt.imshow(test_dataset[random_test_index].reshape((28,28))) ###Output label [5] letter label F Image shape: (1, 28, 28) ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800506 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn.metrics import accuracy_score training_sizes = [50,100,1000,5000] print(train_dataset.shape) print(train_labels.shape) lr_test_set = test_dataset.reshape(test_dataset.shape[0], 784) for n in training_sizes: lr = LogisticRegression() training_indices = random.sample(list(np.arange(train_dataset.shape[0])), n) lr_train_set = np.zeros((n,28,28)) lr_train_labels = [] #Get train set and labels i = 0 for x in training_indices: lr_train_set[i] = train_dataset[x] lr_train_labels.append(train_labels[x]) i += 1 lr_train_set = lr_train_set.reshape(n, 784) lr.fit(lr_train_set, lr_train_labels) prediction = lr.predict(lr_test_set) print("Accuracy when N = ", n, " ====> ", accuracy_score(test_labels, prediction)) ###Output (200000, 28, 28) (200000,) Accuracy when N = 50 ====> 0.6712 Accuracy when N = 100 ====> 0.7724 Accuracy when N = 1000 ====> 0.8308 Accuracy when N = 5000 ====> 0.8517 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz. ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz. ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. 
pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A.pickle already present - Skipping pickling. notMNIST_large/B.pickle already present - Skipping pickling. notMNIST_large/C.pickle already present - Skipping pickling. notMNIST_large/D.pickle already present - Skipping pickling. notMNIST_large/E.pickle already present - Skipping pickling. notMNIST_large/F.pickle already present - Skipping pickling. notMNIST_large/G.pickle already present - Skipping pickling. notMNIST_large/H.pickle already present - Skipping pickling. notMNIST_large/I.pickle already present - Skipping pickling. notMNIST_large/J.pickle already present - Skipping pickling. notMNIST_small/A.pickle already present - Skipping pickling. notMNIST_small/B.pickle already present - Skipping pickling. notMNIST_small/C.pickle already present - Skipping pickling. notMNIST_small/D.pickle already present - Skipping pickling. notMNIST_small/E.pickle already present - Skipping pickling. notMNIST_small/F.pickle already present - Skipping pickling. notMNIST_small/G.pickle already present - Skipping pickling. notMNIST_small/H.pickle already present - Skipping pickling. notMNIST_small/I.pickle already present - Skipping pickling. notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.--- ###Code def visual_validation(pickled_dataset): samples_per_class=3 sample_classes=len(pickled_dataset) print(sample_classes) fig, plt_axes_arr=plt.subplots(sample_classes, samples_per_class) for i, pickle_file in enumerate(pickled_dataset): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) sample_index = np.random.choice(letter_set.shape[0],samples_per_class) for j,idx in enumerate(sample_index): plt_axes_arr[i,j].imshow(np.rot90(letter_set[idx],1), cmap='Greys') plt_axes_arr[i,j].axis('off') except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise plt.show() visual_validation(train_datasets) ###Output 10 ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code def count_labels(dataset): class_sample_arr = np.ndarray(len(dataset), dtype=np.int32) for i, pickle_file in enumerate(dataset): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) class_sample_arr[i] = letter_set.shape[0] except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return class_sample_arr train_labels_count=count_labels(train_datasets) test_labels_count=count_labels(test_datasets) train_std=train_labels_count.std() test_std=test_labels_count.std() print("Train: ",train_labels_count, ", std: ",train_std) print("Test: ",test_labels_count, ", std: ",test_std) ###Output Train: [52909 52911 52912 52911 52912 52912 52912 52912 52912 52911] , std: 0.916515138991 Test: [1872 1873 1873 1873 1873 1872 1872 1872 1872 1872] , std: 0.489897948557 ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
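###Markdown One detail to watch in the next cell (an observation, with a hedged fix sketch): `valid_size = 65536` is not a multiple of the 10 classes, so `vsize_per_class` is 6553 and only 65530 rows of the pre-allocated validation arrays are actually filled; the last 6 rows keep whatever uninitialized values `np.ndarray` happened to allocate. A simple guard, run after that cell, is to trim the arrays to the rows really written; the sketch assumes `valid_dataset`, `valid_labels`, `valid_size` and `num_classes` as defined there. ###Code # Hedged sketch: trim the merged validation arrays to the rows that were filled
filled_rows = (valid_size // num_classes) * num_classes    # 6553 * 10 = 65530
valid_dataset = valid_dataset[:filled_rows]
valid_labels = valid_labels[:filled_rows]
print(valid_dataset.shape, valid_labels.shape)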
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class #np.random.shuffle(letter_set) train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 65536 test_size = 18000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (65536, 28, 28) (65536,) Testing: (18000, 28, 28) (18000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
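###Markdown A side note on reproducibility (a sketch with an arbitrarily assumed seed, not something the original run did): the permutation below draws from NumPy's global random state, so re-running the notebook from a different point can produce a different shuffle; a dedicated `RandomState` makes the shuffle repeatable on its own. ###Code import numpy as np

rng = np.random.RandomState(133)            # assumed seed, chosen arbitrarily here
perm = rng.permutation(5)
print(perm)                                 # same permutation every time this cell runs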
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code def visual_confirmation(train_dataset, train_labels): fig, plt_axes_arr=plt.subplots(5, 5) for i in range(5): sample_index = np.random.choice(train_dataset.shape[0],5) print("sample_index: ", sample_index) print("sample labels:", train_labels[sample_index]) for j,idx in enumerate(sample_index): plt_axes_arr[i,j].imshow(train_dataset[idx],cmap='Greys') plt_axes_arr[i,j].axis('off') plt.show() print("Train Dataset:") visual_confirmation(train_dataset, train_labels) print("Valid Dataset:") visual_confirmation(valid_dataset, valid_labels) ###Output Train Dataset: sample_index: [144902 34509 89830 150408 107960] sample labels: [6 4 8 5 8] sample_index: [128342 175350 170357 187935 131273] sample labels: [2 3 2 2 0] sample_index: [16367 6236 31727 78508 66699] sample labels: [7 2 7 0 5] sample_index: [ 24201 181224 56381 39402 98331] sample labels: [2 3 9 9 8] sample_index: [ 69107 11432 35141 33660 153501] sample labels: [8 9 5 3 4] ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 890303485 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? 
(images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code import time start_time = time.time() train_dataset_flat=train_dataset.reshape(200000,784) train_dataset_flat_plus=np.zeros((200000,785)) train_dataset_flat_plus[:,:-1]=train_dataset_flat train_dataset_flat_plus[:,784]=train_labels valid_dataset_flat=valid_dataset.reshape(65536,784) valid_dataset_flat_plus=np.zeros((65536,785)) valid_dataset_flat_plus[:,:-1]=valid_dataset_flat valid_dataset_flat_plus[:,784]=valid_labels test_dataset_flat=test_dataset.reshape(18000,784) test_dataset_flat_plus=np.zeros((18000,785)) test_dataset_flat_plus[:,:-1]=test_dataset_flat test_dataset_flat_plus[:,784]=test_labels trainset = set([tuple(x) for x in train_dataset_flat_plus]) validset = set([tuple(x) for x in valid_dataset_flat_plus]) testset = set([tuple(x) for x in test_dataset_flat_plus]) train_clean_arr=np.array([x for x in trainset if x not in validset if x not in testset]) valid_no_test_arr=np.array([x for x in validset if x not in testset]) test_arr=np.array([x for x in testset]) print("Time taken to remove overlaps in time:\n- %s seconds ---" % (time.time() - start_time)) print("Clean Train size: ",train_clean_arr.shape[0]) print("Clean Valid size: ",valid_no_test_arr.shape[0]) print("Clean Test size: ",test_arr.shape[0]) train_cleaned_labels=train_clean_arr[:,784] train_cleaned_data=train_clean_arr[:,0:784].reshape(179817,28,28) valid_no_test_arr_labels=valid_no_test_arr[:,784] valid_no_test_arr_data=valid_no_test_arr[:,0:784].reshape(62958,28,28) test_arr_labels=test_arr[:,784] test_arr_data=test_arr[:,0:784].reshape(17540,28,28) print(train_cleaned_data.shape) print(valid_no_test_arr_data.shape) print(test_arr_data.shape) def visual_confirmation(train_dataset, train_labels): fig, plt_axes_arr=plt.subplots(5, 5) for i in range(5): sample_index = np.random.choice(train_dataset.shape[0],5) print("sample_index: ", sample_index) print("sample labels:", train_labels[sample_index]) for j,idx in enumerate(sample_index): plt_axes_arr[i,j].imshow(train_dataset[idx],cmap='Greys') plt_axes_arr[i,j].axis('off') plt.show() print("Train Dataset:") visual_confirmation(train_dataset, train_labels) pickle_file = 'cleanMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_cleaned_data, 'train_labels': train_cleaned_labels, 'valid_dataset': valid_no_test_arr_data, 'valid_labels': valid_no_test_arr_labels, 'test_dataset': test_arr_data, 'test_labels': test_arr_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise ###Output _____no_output_____ ###Markdown Optional questions: What about near duplicates between datasets? (images that are almost identical) Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments. ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn import linear_model from sklearn.metrics import accuracy_score from sklearn import grid_search clf = linear_model.LogisticRegression(C=0.1, penalty='l2',solver='liblinear',max_iter=150,intercept_scaling=2,multi_class='ovr',n_jobs=3) ###parameters = {'loss':('hinge', 'modified_huber','log'), 'n_jobs':[1],'verbose'=(0)} ###clf = grid_search.GridSearchCV(sgd_clf, parameters) clf.fit(train_clean_arr[:,:-1], train_clean_arr[:,784]) pred=clf.predict(test_arr[:,:-1]) actual=test_arr[:,784] print("Prediction Accuracy on Test Data: ",accuracy_score(actual, pred)) ###print("Best Estimator: \n",clf.best_estimator_) ###Output _____no_output_____ ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. 
Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
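# (Added note) The normalization applied in load_letter() below,
#   (pixel - pixel_depth / 2) / pixel_depth,
# maps raw 8-bit values from [0, 255] to roughly [-0.5, 0.5]
# (0 -> -0.5, 255 -> +0.5), which is what gives the "approximately zero mean
# and standard deviation ~0.5" described in the cell above.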
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels):
  permutation = np.random.permutation(labels.shape[0])
  shuffled_dataset = dataset[permutation,:,:]
  shuffled_labels = labels[permutation]
  return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle')

try:
  f = open(pickle_file, 'wb')
  save = {
    'train_dataset': train_dataset,
    'train_labels': train_labels,
    'valid_dataset': valid_dataset,
    'valid_labels': valid_labels,
    'test_dataset': test_dataset,
    'test_labels': test_labels,
    }
  pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
  f.close()
except Exception as e:
  print('Unable to save data to', pickle_file, ':', e)
  raise

statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
###Output Compressed pickle size: 718193801
###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None

def download_progress_hook(count, blockSize, totalSize):
  """A hook to report the progress of a download. This is mostly intended for users with
  slow internet connections. Reports every 5% change in download progress.
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Attempting to download: notMNIST_large.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete! Found and verified notMNIST_large.tar.gz Attempting to download: notMNIST_small.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete! Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output Extracting data for notMNIST_large. This may take a while. Please wait. ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] Extracting data for notMNIST_small. This may take a while. Please wait. ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.![A1](notMNIST_large/A/ISBKYW1pcm9xdWFpICEudHRm.png "A1")![A2](notMNIST_large/A/IVNrZXRjaHkgVGltZXMgQm9sZC50dGY=.png "A2")![A3](notMNIST_large/A/IVkyS0JVRy50dGY=.png "A3")--- ###Code Image(filename='notMNIST_large/A/ISBKYW1pcm9xdWFpICEudHRm.png') Image(filename='notMNIST_large/A/IVNrZXRjaHkgVGltZXMgQm9sZC50dGY=.png') Image(filename='notMNIST_large/A/IVkyS0JVRy50dGY=.png') ###Output _____no_output_____ ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output Pickling notMNIST_large/A.pickle. notMNIST_large/A ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
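Because the class folders were listed in sorted order (A through J), label 0 will correspond to 'A', label 1 to 'B', and so on. The next cell is an added convenience sketch (the `label_to_letter` helper is not part of the original notebook) for reading integer labels back as letters. ###Code # Added sketch: map an integer label (0-9) back to its letter (A-J),
# assuming the sorted A..J folder order used by maybe_extract above.
def label_to_letter(label):
    return chr(ord('A') + int(label))

# e.g. label_to_letter(0) == 'A', label_to_letter(9) == 'J'
###Output _____no_output_____ ###Markdown The merging and pruning code itself follows.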
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
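If you want the shuffle to be reproducible from run to run, the permutation can be drawn from a seeded generator instead of the global NumPy state. This is an optional sketch on a toy array; the `randomize` helper in the next cell keeps using `np.random` directly.
###Code # Optional sketch: a reproducible permutation via a seeded RandomState
# (seed 133 is the same value this notebook passes to np.random.seed above).
import numpy as np

rng = np.random.RandomState(133)
toy_labels = np.arange(10)
perm = rng.permutation(toy_labels.shape[0])
toy_shuffled = toy_labels[perm]   # identical ordering on every run with this seed
###Output _____no_output_____ ###Markdown The shuffle for the actual datasets: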
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Attempting to download: notMNIST_large.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete! Found and verified .\notMNIST_large.tar.gz Attempting to download: notMNIST_small.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete! Found and verified .\notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output Extracting data for .\notMNIST_large. This may take a while. Please wait. ['.\\notMNIST_large\\A', '.\\notMNIST_large\\B', '.\\notMNIST_large\\C', '.\\notMNIST_large\\D', '.\\notMNIST_large\\E', '.\\notMNIST_large\\F', '.\\notMNIST_large\\G', '.\\notMNIST_large\\H', '.\\notMNIST_large\\I', '.\\notMNIST_large\\J'] Extracting data for .\notMNIST_small. This may take a while. Please wait. ['.\\notMNIST_small\\A', '.\\notMNIST_small\\B', '.\\notMNIST_small\\C', '.\\notMNIST_small\\D', '.\\notMNIST_small\\E', '.\\notMNIST_small\\F', '.\\notMNIST_small\\G', '.\\notMNIST_small\\H', '.\\notMNIST_small\\I', '.\\notMNIST_small\\J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.--- ###Code from IPython.display import Image import os for e in train_folders: display(Image(e+"\\"+os.listdir(e)[0])) for e in test_folders: display(Image(e+"\\"+os.listdir(e)[0])) ###Output _____no_output_____ ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output Pickling .\notMNIST_large\A.pickle. .\notMNIST_large\A Could not read: .\notMNIST_large\A\RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Could not read: .\notMNIST_large\A\SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Could not read: .\notMNIST_large\A\Um9tYW5hIEJvbGQucGZi.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12825024 Standard deviation: 0.44312063 Pickling .\notMNIST_large\B.pickle. .\notMNIST_large\B Could not read: .\notMNIST_large\B\TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. 
Full dataset tensor: (52911, 28, 28) Mean: -0.0075630303 Standard deviation: 0.45449105 Pickling .\notMNIST_large\C.pickle. .\notMNIST_large\C Full dataset tensor: (52912, 28, 28) Mean: -0.14225811 Standard deviation: 0.43980625 Pickling .\notMNIST_large\D.pickle. .\notMNIST_large\D Could not read: .\notMNIST_large\D\VHJhbnNpdCBCb2xkLnR0Zg==.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.057367794 Standard deviation: 0.45564765 Pickling .\notMNIST_large\E.pickle. .\notMNIST_large\E Full dataset tensor: (52912, 28, 28) Mean: -0.06989899 Standard deviation: 0.45294195 Pickling .\notMNIST_large\F.pickle. .\notMNIST_large\F Full dataset tensor: (52912, 28, 28) Mean: -0.1255833 Standard deviation: 0.44708964 Pickling .\notMNIST_large\G.pickle. .\notMNIST_large\G Full dataset tensor: (52912, 28, 28) Mean: -0.09458135 Standard deviation: 0.44623983 Pickling .\notMNIST_large\H.pickle. .\notMNIST_large\H Full dataset tensor: (52912, 28, 28) Mean: -0.06852206 Standard deviation: 0.45423177 Pickling .\notMNIST_large\I.pickle. .\notMNIST_large\I Full dataset tensor: (52912, 28, 28) Mean: 0.03078625 Standard deviation: 0.46889907 Pickling .\notMNIST_large\J.pickle. .\notMNIST_large\J Full dataset tensor: (52911, 28, 28) Mean: -0.15335836 Standard deviation: 0.44365644 Pickling .\notMNIST_small\A.pickle. .\notMNIST_small\A Could not read: .\notMNIST_small\A\RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.13262637 Standard deviation: 0.44512793 Pickling .\notMNIST_small\B.pickle. .\notMNIST_small\B Full dataset tensor: (1873, 28, 28) Mean: 0.005356085 Standard deviation: 0.45711532 Pickling .\notMNIST_small\C.pickle. .\notMNIST_small\C Full dataset tensor: (1873, 28, 28) Mean: -0.1415206 Standard deviation: 0.4426903 Pickling .\notMNIST_small\D.pickle. .\notMNIST_small\D Full dataset tensor: (1873, 28, 28) Mean: -0.04921666 Standard deviation: 0.4597589 Pickling .\notMNIST_small\E.pickle. .\notMNIST_small\E Full dataset tensor: (1873, 28, 28) Mean: -0.05991479 Standard deviation: 0.45734963 Pickling .\notMNIST_small\F.pickle. .\notMNIST_small\F Could not read: .\notMNIST_small\F\Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : Could not find a format to read the specified file in mode 'i' - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118185304 Standard deviation: 0.45227867 Pickling .\notMNIST_small\G.pickle. .\notMNIST_small\G Full dataset tensor: (1872, 28, 28) Mean: -0.09255028 Standard deviation: 0.44900584 Pickling .\notMNIST_small\H.pickle. .\notMNIST_small\H Full dataset tensor: (1872, 28, 28) Mean: -0.05868925 Standard deviation: 0.45875895 Pickling .\notMNIST_small\I.pickle. .\notMNIST_small\I Full dataset tensor: (1872, 28, 28) Mean: 0.05264507 Standard deviation: 0.47189355 Pickling .\notMNIST_small\J.pickle. .\notMNIST_small\J Full dataset tensor: (1872, 28, 28) Mean: -0.15168911 Standard deviation: 0.44801357 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.--- ###Code fig=plt.figure() columns = 5 rows = 2 for i in range(len(train_datasets)): figs_from_data = pickle.load(open(train_datasets[i], 'rb')) fig.add_subplot(rows, columns, i+1) plt.imshow(figs_from_data[2200], cmap='gray') plt.show() ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code # Just need to check that in each pickle are about the same number of elements for i in range(len(train_datasets)): figs_from_data = pickle.load(open(train_datasets[i], 'rb')) print(len(figs_from_data)) ###Output 52909 52911 52912 52911 52912 52912 52912 52912 52912 52911 ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
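To see why the shuffle matters, recall that merge_datasets filled the label arrays class by class, so before shuffling any contiguous slice of the training set covers only one letter. The cell below is an added illustration using synthetic block-ordered labels rather than the real arrays.
###Code # Added illustration (synthetic labels): without shuffling, a small contiguous
# slice contains a single class; after a random permutation it contains a mix
# of classes (with overwhelming probability).
import numpy as np

block_labels = np.repeat(np.arange(10), 20000)        # 10 classes laid out in contiguous blocks
single_class = np.unique(block_labels[:5000])         # -> array([0])
perm = np.random.permutation(block_labels.shape[0])
mixed_classes = np.unique(block_labels[perm][:5000])  # -> typically all ten classes
###Output _____no_output_____ ###Markdown Now the shuffle itself: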
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code fig=plt.figure() columns = 5 rows = 2 for i in range(10): figs_from_data = train_dataset[i] fig.add_subplot(rows, columns, i+1) plt.imshow(figs_from_data, cmap='gray') print(train_labels[i]) plt.show() ###Output 4 9 6 2 7 3 5 9 6 4 ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800506 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code test_uniq = len(np.unique(test_dataset, axis=0)) train_uniq = len(np.unique(train_dataset, axis=0)) comb_uniq = len(np.unique(np.concatenate((test_dataset, train_dataset), axis=0), axis=0)) print(test_uniq + train_uniq - comb_uniq) from sklearn.model_selection import train_test_split # TODO ###Output _____no_output_____ ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn.linear_model import LogisticRegression log_clf = LogisticRegression(random_state=42, solver='lbfgs', multi_class='auto', max_iter=500) n1 = 5000 # https://stackoverflow.com/questions/34972142/sklearn-logistic-regression-valueerror-found-array-with-dim-3-estimator-expec nsamples, nx, ny = train_dataset.shape d2_train_dataset = train_dataset.reshape((nsamples,nx*ny)) log_clf.fit(d2_train_dataset[:n1], train_labels[:n1]) # Prediction from sklearn.metrics import accuracy_score nsamples, nx, ny = test_dataset.shape d2_test_dataset = test_dataset.reshape((nsamples,nx*ny)) y_pred = log_clf.predict(d2_test_dataset) print(y_pred[:20]) print(test_labels[:20]) accuracy_score(test_labels, y_pred) nsamples, nx, ny = test_dataset.shape d2_valid_dataset = valid_dataset.reshape((nsamples,nx*ny)) y_pred = log_clf.predict(d2_valid_dataset) print(y_pred[:20]) print(valid_labels[:20]) accuracy_score(valid_labels, y_pred) ###Output [5 9 3 8 9 5 9 7 0 0 3 4 9 2 4 2 4 4 7 1] [1 9 3 8 9 3 9 7 0 7 3 4 9 2 4 2 4 2 7 1] ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified ./notMNIST_large.tar.gz Found and verified ./notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ./notMNIST_large already present - Skipping extraction of ./notMNIST_large.tar.gz. ['./notMNIST_large/A', './notMNIST_large/B', './notMNIST_large/C', './notMNIST_large/D', './notMNIST_large/E', './notMNIST_large/F', './notMNIST_large/G', './notMNIST_large/H', './notMNIST_large/I', './notMNIST_large/J'] ./notMNIST_small already present - Skipping extraction of ./notMNIST_small.tar.gz. ['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D', './notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H', './notMNIST_small/I', './notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.--- ###Code import random #import sys #from IPython.display import Image def print_sample_files_from_sub_folders(root_folder): for sub_folder in root_folder: abs_path_sub_folder = os.path.abspath(sub_folder) random_file = random.choice(os.listdir(sub_folder)) file_path = os.path.join(abs_path_sub_folder, random_file) display(Image(filename=file_path)) print('displaying random images from train_folders') print_sample_files_from_sub_folders(train_folders) print('displaying random images from test_folders') print_sample_files_from_sub_folders(test_folders) ###Output displaying random images from train_folders ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output ./notMNIST_large/A.pickle already present - Skipping pickling. ./notMNIST_large/B.pickle already present - Skipping pickling. ./notMNIST_large/C.pickle already present - Skipping pickling. ./notMNIST_large/D.pickle already present - Skipping pickling. ./notMNIST_large/E.pickle already present - Skipping pickling. ./notMNIST_large/F.pickle already present - Skipping pickling. ./notMNIST_large/G.pickle already present - Skipping pickling. ./notMNIST_large/H.pickle already present - Skipping pickling. 
./notMNIST_large/I.pickle already present - Skipping pickling. ./notMNIST_large/J.pickle already present - Skipping pickling. ./notMNIST_small/A.pickle already present - Skipping pickling. ./notMNIST_small/B.pickle already present - Skipping pickling. ./notMNIST_small/C.pickle already present - Skipping pickling. ./notMNIST_small/D.pickle already present - Skipping pickling. ./notMNIST_small/E.pickle already present - Skipping pickling. ./notMNIST_small/F.pickle already present - Skipping pickling. ./notMNIST_small/G.pickle already present - Skipping pickling. ./notMNIST_small/H.pickle already present - Skipping pickling. ./notMNIST_small/I.pickle already present - Skipping pickling. ./notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code %matplotlib inline def show_samples(datasets): number_plots = 1 for dataset in datasets: with open(dataset, 'rb') as file: data = pickle.load(file) plt.figure() for iterate in range(number_plots): plt.subplot(1, number_plots, iterate + 1) plt.axis('off') plt.imshow(data[0]) show_samples(train_datasets) show_samples(test_datasets) ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code def verify_shape(datasets): for dataset in datasets: with (open(dataset, "rb")) as openfile: try: data = pickle.load(openfile) print(dataset, data.shape) except EOFError: break verify_shape(train_datasets) verify_shape(test_datasets) ###Output ./notMNIST_large/A.pickle (52909, 28, 28) ./notMNIST_large/B.pickle (52911, 28, 28) ./notMNIST_large/C.pickle (52912, 28, 28) ./notMNIST_large/D.pickle (52911, 28, 28) ./notMNIST_large/E.pickle (52912, 28, 28) ./notMNIST_large/F.pickle (52912, 28, 28) ./notMNIST_large/G.pickle (52912, 28, 28) ./notMNIST_large/H.pickle (52912, 28, 28) ./notMNIST_large/I.pickle (52912, 28, 28) ./notMNIST_large/J.pickle (52911, 28, 28) ./notMNIST_small/A.pickle (1872, 28, 28) ./notMNIST_small/B.pickle (1873, 28, 28) ./notMNIST_small/C.pickle (1873, 28, 28) ./notMNIST_small/D.pickle (1873, 28, 28) ./notMNIST_small/E.pickle (1873, 28, 28) ./notMNIST_small/F.pickle (1872, 28, 28) ./notMNIST_small/G.pickle (1872, 28, 28) ./notMNIST_small/H.pickle (1872, 28, 28) ./notMNIST_small/I.pickle (1872, 28, 28) ./notMNIST_small/J.pickle (1872, 28, 28) ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
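Because the labels end up as bare integers, a tiny helper that maps them back to letters makes later spot checks easier to read. This is just a convenience sketch (the name `label_to_letter` is ours, not part of the assignment); it relies only on label 0 meaning 'A' and label 9 meaning 'J'.
###Code
# Convenience sketch: map an integer class label (0-9) back to its letter ('A'-'J').
def label_to_letter(label):
    return chr(ord('A') + int(label))

# For example, label 3 corresponds to the letter 'D'.
assert label_to_letter(3) == 'D'
###Output
_____no_output_____
###Markdown
The merging itself is handled by the two helpers below.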
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
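Right after merging, the labels are still grouped class by class (all the 0s first, then all the 1s, and so on), because `merge_datasets` copies each pickle file into one contiguous block. A quick sanity check of that ordering, meant to be run before the shuffle below:
###Code
# Before shuffling, merge_datasets has filled the label array one class at a time,
# so the first entries are all 0 ('A') and the last entries are all 9 ('J').
assert train_labels[0] == 0 and train_labels[-1] == 9
###Output
_____no_output_____
###Markdown
Shuffling with a single random permutation removes that ordering: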
###Code
def randomize(dataset, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_dataset = dataset[permutation,:,:]
    shuffled_labels = labels[permutation]
    return shuffled_dataset, shuffled_labels

train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
###Output
_____no_output_____
###Markdown
---Problem 4---------Convince yourself that the data is still good after shuffling!---
###Code
def verify_after_shuffling(dataset, labels):
    character_set = [chr(i) for i in range(ord('A'), ord('Z')+1)]
    number_plots = 10
    random_samples = random.sample(range(len(labels)), number_plots)
    plt.figure()
    for iterate in range(number_plots):
        plt.subplot(1, number_plots, iterate + 1)
        plt.axis('off')
        plt.title(character_set[labels[random_samples[iterate]]])
        plt.imshow(dataset[random_samples[iterate]])

verify_after_shuffling(test_dataset, test_labels)
verify_after_shuffling(valid_dataset, valid_labels)
###Output
_____no_output_____
###Markdown
Finally, let's save the data for later reuse:
###Code
pickle_file = os.path.join(data_root, 'notMNIST.pickle')

try:
    f = open(pickle_file, 'wb')
    save = {
        'train_dataset': train_dataset,
        'train_labels': train_labels,
        'valid_dataset': valid_dataset,
        'valid_labels': valid_labels,
        'test_dataset': test_dataset,
        'test_labels': test_labels,
    }
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
    f.close()
except Exception as e:
    print('Unable to save data to', pickle_file, ':', e)
    raise

statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
###Output
Compressed pickle size: 690800503
###Markdown
---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.---
###Code
# Approaches attempted that didn't work:
# 1. comparing as ndarray gives the wrong answer
# 2. convert ndarrays to list and compared as list. TypeError: unhashable type: 'list'
# 3.
convert ndarrays to list of string values for comparison def convert_to_list_of_string_values(dataset): dataset_list = [] for i in range(dataset.shape[0]): dataset_list.append(dataset[i].tostring()) return dataset_list def find_intersection_ndarrays(dataset1, dataset2): dataset_list1 = convert_to_list_of_string_values(dataset1) dataset_list2 = convert_to_list_of_string_values(dataset2) list_intersection = set(dataset_list1).intersection(dataset_list2) return len(list_intersection) print('number of intersections between train dataset and test dataset: ', find_intersection_ndarrays(train_dataset, test_dataset)) print('number of intersections between train dataset and valid dataset: ', find_intersection_ndarrays(train_dataset, valid_dataset)) print('number of intersections between test dataset and valid dataset: ', find_intersection_ndarrays(test_dataset, valid_dataset)) ###Output number of intersections between train dataset and test dataset: 1113 number of intersections between train dataset and valid dataset: 976 number of intersections between test dataset and valid dataset: 71 ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code logits = LogisticRegression() num_samples = [50, 100, 1000, 5000] scores = np.zeros(4) for i, n in enumerate(num_samples): logits.fit(train_dataset[:n].reshape(n, -1), train_labels[:n]) score = logits.score(test_dataset.reshape(len(test_dataset), -1), test_labels) print("LinearRegressionModel iteration %d, with samples %d, scores %f" % (i, n, score)) ###Output LinearRegressionModel iteration 0, with samples 50, scores 0.668600 LinearRegressionModel iteration 1, with samples 100, scores 0.728200 LinearRegressionModel iteration 2, with samples 1000, scores 0.825600 LinearRegressionModel iteration 3, with samples 5000, scores 0.847200 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). 
The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified ./notMNIST_large.tar.gz Found and verified ./notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ./notMNIST_large already present - Skipping extraction of ./notMNIST_large.tar.gz. ['./notMNIST_large/A', './notMNIST_large/B', './notMNIST_large/C', './notMNIST_large/D', './notMNIST_large/E', './notMNIST_large/F', './notMNIST_large/G', './notMNIST_large/H', './notMNIST_large/I', './notMNIST_large/J'] ./notMNIST_small already present - Skipping extraction of ./notMNIST_small.tar.gz. ['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D', './notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H', './notMNIST_small/I', './notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.--- ###Code from IPython.display import Image sample_images = ['./notMNIST_small/A/MDEtMDEtMDAudHRm.png', './notMNIST_small/B/SVRDR2FyYW1vbmRTdGQtQmQub3Rm.png', './notMNIST_small/C/Q2Fybml2YWwub3Rm.png'] for img in sample_images: display(Image(filename=img)) ###Output _____no_output_____ ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output ./notMNIST_large/A.pickle already present - Skipping pickling. ./notMNIST_large/B.pickle already present - Skipping pickling. ./notMNIST_large/C.pickle already present - Skipping pickling. ./notMNIST_large/D.pickle already present - Skipping pickling. ./notMNIST_large/E.pickle already present - Skipping pickling. ./notMNIST_large/F.pickle already present - Skipping pickling. ./notMNIST_large/G.pickle already present - Skipping pickling. ./notMNIST_large/H.pickle already present - Skipping pickling. ./notMNIST_large/I.pickle already present - Skipping pickling. ./notMNIST_large/J.pickle already present - Skipping pickling. ./notMNIST_small/A.pickle already present - Skipping pickling. ./notMNIST_small/B.pickle already present - Skipping pickling. ./notMNIST_small/C.pickle already present - Skipping pickling. 
./notMNIST_small/D.pickle already present - Skipping pickling. ./notMNIST_small/E.pickle already present - Skipping pickling. ./notMNIST_small/F.pickle already present - Skipping pickling. ./notMNIST_small/G.pickle already present - Skipping pickling. ./notMNIST_small/H.pickle already present - Skipping pickling. ./notMNIST_small/I.pickle already present - Skipping pickling. ./notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code print("Test Datasets:") print(test_datasets) pkl_file_a = open(test_datasets[0], 'rb') dataset_a = pickle.load(pkl_file_a) print("\nShape of test dataset for 'A': ", dataset_a.shape) print("\nFirst element of test dataset for 'A': ") plt.imshow(dataset_a[0]) ###Output Test Datasets: ['./notMNIST_small/A.pickle', './notMNIST_small/B.pickle', './notMNIST_small/C.pickle', './notMNIST_small/D.pickle', './notMNIST_small/E.pickle', './notMNIST_small/F.pickle', './notMNIST_small/G.pickle', './notMNIST_small/H.pickle', './notMNIST_small/I.pickle', './notMNIST_small/J.pickle'] Shape of test dataset for 'A': (1872, 28, 28) First element of test dataset for 'A': ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code print("Verifying Data is balanced accross classes...") for clz, train_pickle in enumerate(train_datasets): pkl_file = open(train_pickle, 'rb') dataset = pickle.load(pkl_file) print(clz, ":", dataset.shape) ###Output Verifying Data is balanced accross classes... 0 : (52909, 28, 28) 1 : (52911, 28, 28) 2 : (52912, 28, 28) 3 : (52911, 28, 28) 4 : (52912, 28, 28) 5 : (52912, 28, 28) 6 : (52912, 28, 28) 7 : (52912, 28, 28) 8 : (52912, 28, 28) 9 : (52911, 28, 28) ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
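To make the slicing in `merge_datasets` concrete: the requested sizes are divided evenly over the 10 classes, so with the `train_size = 200000` and `valid_size = 10000` used further down, each letter contributes 20,000 training and 1,000 validation images. A quick check of that arithmetic (a sketch, nothing more):
###Code
# Back-of-envelope check of how merge_datasets divides the requested sizes.
num_classes = 10                        # letters 'A' through 'J'
train_size, valid_size = 200000, 10000  # same values as used below

tsize_per_class = train_size // num_classes  # 20000 training images per letter
vsize_per_class = valid_size // num_classes  # 1000 validation images per letter

assert tsize_per_class == 20000 and vsize_per_class == 1000
###Output
_____no_output_____
###Markdown
The helpers below implement exactly that per-class split.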
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
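One practical note: the permutation below is drawn from NumPy's global random state (seeded earlier in this notebook with `np.random.seed(133)`), so its result depends on every random call made before this point. If the shuffle should be reproducible on its own, one option (a sketch, not something the assignment asks for) is a variant that owns its random state:
###Code
import numpy as np

# Hypothetical variant of the shuffle that uses a dedicated RandomState, so
# re-running just this step always yields the same permutation.
def randomize_seeded(dataset, labels, seed=133):
    rng = np.random.RandomState(seed)
    permutation = rng.permutation(labels.shape[0])
    return dataset[permutation, :, :], labels[permutation]
###Output
_____no_output_____
###Markdown
The version used here, which draws from the global random state, follows.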
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code print("\nFirst element of shuffled test dataset: ") plt.imshow(test_dataset[0]) ###Output First element of shuffled test dataset: ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800441 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? 
(images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code import hashlib pkl_file = open('notMNIST.pickle', 'rb') all_datasets = pickle.load(pkl_file) train_dataset = all_datasets["train_dataset"] valid_dataset = all_datasets["valid_dataset"] test_dataset = all_datasets["test_dataset"] train_hashes = [hashlib.sha1(x).digest() for x in train_dataset] valid_hashes = [hashlib.sha1(x).digest() for x in valid_dataset] test_hashes = [hashlib.sha1(x).digest() for x in test_dataset] valid_in_train = np.in1d(valid_hashes, train_hashes) test_in_train = np.in1d(test_hashes, train_hashes) test_in_valid = np.in1d(test_hashes, valid_hashes) valid_keep = ~valid_in_train test_keep = ~(test_in_train | test_in_valid) valid_dataset_clean = valid_dataset[valid_keep] valid_labels_clean = valid_labels [valid_keep] test_dataset_clean = test_dataset[test_keep] test_labels_clean = test_labels [test_keep] # Cleaning train set as well train_hashes_set = set(train_hashes) train_keep = [] for img_hash in train_hashes: if img_hash in train_hashes_set: train_keep.append(True) train_hashes_set.remove(img_hash) else: train_keep.append(False) train_keep = np.array(train_keep, dtype=bool) train_dataset_clean = train_dataset[train_keep] train_labels_clean = train_labels[train_keep] print("train_dataset_clean: ", len(train_dataset_clean)) print("valid_dataset_clean: ", len(valid_dataset_clean)) print("test_dataset_clean: ", len(test_dataset_clean)) ###Output train_dataset_clean: 187217 valid_dataset_clean: 8933 test_dataset_clean: 8639 ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn.linear_model import LogisticRegression import time sizes = [50, 100, 1000, 5000] y_valid = np.reshape(valid_dataset, newshape=(len(valid_dataset), image_size*image_size)) y_valid_clean = np.reshape(valid_dataset_clean, newshape=(len(valid_dataset_clean), image_size*image_size)) print("=== LogClassifier results on non-sanitized sets ===") for size in sizes: X = np.reshape(train_dataset[0:size], newshape=(size, image_size*image_size)) log_reg = LogisticRegression(random_state=42) t0 = time.time() log_reg.fit(X, train_labels[0:size]) t1 = time.time() print("train time for size '{}': {:.4f}".format(size, t1-t0)) score = log_reg.score(y_valid, valid_labels) print("score for size '{}': {:.4f}".format(size, score)) print("\n=== LogClassifier results on sanitized sets ===") for size in sizes: X = np.reshape(train_dataset_clean[0:size], newshape=(size, image_size*image_size)) log_reg = LogisticRegression(random_state=42) t0 = time.time() log_reg.fit(X, train_labels_clean[0:size]) t1 = time.time() print("train time for size '{}': {:.4f}".format(size, t1-t0)) score = log_reg.score(y_valid_clean, valid_labels_clean) print("score for size '{}': {:.4f}".format(size, score)) ###Output === LogClassifier results on non-sanitized sets === train time for size '50': 0.0562 score for size '50': 0.4649 train time for size '100': 0.0979 score for size '100': 0.6322 train time for size '1000': 1.9529 score for size '1000': 0.7577 train time for size '5000': 16.6265 score for size '5000': 0.7757 === LogClassifier results on sanitized sets === train time for size '50': 0.0427 score for size '50': 0.4552 train time for size '100': 0.1053 score for size '100': 0.6222 train time for size '1000': 1.9738 score for size '1000': 0.7447 train time for size '5000': 17.3051 score for size '5000': 0.7639 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. 
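To put "these sizes" in perspective, a rough back-of-envelope estimate (our own approximation, assuming the 28x28 float32 representation used later in this notebook): roughly 500k training images come to about 1.5 GB once loaded, which is why the per-class pickling and the tunable `train_size` further down matter on smaller machines.
###Code
# Back-of-envelope memory estimate for the full training set as float32.
num_images = 500000        # "about 500k" training examples
image_size = 28            # pixels per side
bytes_per_value = 4        # float32

total_bytes = num_images * image_size * image_size * bytes_per_value
print('~%.1f GB' % (total_bytes / 1e9))
###Output
~1.6 GB
###Markdown
With that in mind, first fetch the two archives: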
###Code url = 'http://yaroslavvb.com/upload/notMNIST/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 def extract(filename): tar = tarfile.open(filename) root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz print('Extracting data for %s. This may take a while. Please wait.' % root) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if d != '.DS_Store'] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = extract(train_filename) test_folders = extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print folder for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def load(data_folders, min_num_images_per_class): dataset_names = [] for folder in data_folders: dataset = load_letter(folder, min_num_images_per_class) set_filename = folder + '.pickle' try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) dataset_names.append(set_filename) except Exception as e: print('Unable to save data to', pickle_file, ':', e) return dataset_names train_datasets = load(train_folders, 45000) test_datasets = load(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(train_datasets, train_size, valid_size) __, __, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) 
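###Markdown
Before moving on, it is worth confirming that the merged labels really are balanced: `merge_datasets` takes the same number of images from every pickle, so each of the 10 classes should occur exactly 20,000 times in the training labels and 1,000 times in the validation and test labels. A quick check (a sketch to run right after the merge above):
###Code
import numpy as np

# merge_datasets fills the arrays with an equal share from every class, so the
# label histograms should be perfectly flat.
assert np.bincount(train_labels).tolist() == [20000] * 10
assert np.bincount(valid_labels).tolist() == [1000] * 10
assert np.bincount(test_labels).tolist() == [1000] * 10
###Output
_____no_output_____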
###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code np.random.seed(133) def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. 
Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print(folder) for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' #change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): ''' A hook to report the progress of the download. This is intended for users with slow internet connections. Reports every 5% change in download progress. 
''' global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent %5 == 0: sys.stdout.write("{}%".format(percent)) sys.stdout.flush() else: sys.stdout.write('.') sys.stdout.flush() last_percent_reported = percent def maybe_download (filename, expected_bytes, force=False): ''' Download a file if not present, and make sure it is the right size. ''' dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Attempting to download: notMNIST_large.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete Found and verified ./notMNIST_large.tar.gz Attempting to download: notMNIST_small.tar.gz 0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100% Download Complete Found and verified ./notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(38) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] #remove .tar.gz if os.path.isdir(root) and not force: # can be overidden by setting force=True print('{} already present - Skipping extraction of ().'.format(root, filename)) else: print('Extracting data for {}. This may take a while. Please wait.'.format(root)) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root,d)) ] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.'%(num_classes, len(data_folders))) print(data_folders) return(data_folders) train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output Extracting data for ./notMNIST_large. This may take a while. Please wait. Extracting data for ./notMNIST_small. This may take a while. Please wait. ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. 
A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names ###Output _____no_output_____ ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
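When you start checking the merged arrays, it can be handy to turn an integer label back into its letter; here is a tiny illustrative sketch (the `label_to_letter` helper is an assumption of this note, not part of the assignment code):

###Code
# Purely illustrative helper: map an integer label 0-9 back to its letter
# 'A'-'J', e.g. for titling plots when answering Problems 2 and 4.
def label_to_letter(label):
    return chr(ord('A') + int(label))

print([label_to_letter(i) for i in range(10)])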
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
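To see why one shared permutation must index both arrays, here is a toy sketch that is independent of the notMNIST arrays (NumPy is re-imported only so the snippet stands alone):

###Code
import numpy as np

# Toy check: five fake "images" whose single pixel equals their label, so the
# image/label pairing is trivial to verify after shuffling.
toy_data = np.arange(5, dtype=np.float32).reshape(5, 1, 1)
toy_labels = np.arange(5)

perm = np.random.permutation(toy_labels.shape[0])
shuffled_data, shuffled_labels = toy_data[perm, :, :], toy_labels[perm]

# The pairing survives because the SAME permutation indexes both arrays.
assert (shuffled_data[:, 0, 0].astype(int) == shuffled_labels).all()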
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle print(os.getcwd()) ###Output /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://yaroslavvb.com/upload/notMNIST/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify' + filename + '. Can you get to it with a browser?') return filename print(os.getcwd() + '/notMNIST_large.tar.gz') train_filename = maybe_download(os.getcwd() + '/notMNIST_large.tar.gz', 247336696) test_filename = maybe_download(os.getcwd() + '/notMNIST_small.tar.gz', 8458043) ###Output /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. 
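Once the extraction cell below has run, a quick directory listing is an easy sanity check; a sketch (it assumes the archives were unpacked into the current working directory, which is what `tar.extractall()` below does):

###Code
import os

# Post-extraction sanity check: each of the ten class folders A-J should exist
# and contain many PNG files.
small_root = 'notMNIST_small'
if os.path.isdir(small_root):
    for letter in sorted(os.listdir(small_root)):
        letter_dir = os.path.join(small_root, letter)
        if os.path.isdir(letter_dir):
            print(letter, '->', len(os.listdir(letter_dir)), 'files')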
###Code num_classes = 10 np.random.seed(133) train_filename = os.getcwd() + '/notMNIST_large.tar.gz' test_filename = os.getcwd() + '/notMNIST_small.tar.gz' def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) print('Finish loading %s and %s' % (train_filename, test_filename)) ###Output /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large already present - Skipping extraction of /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large.tar.gz. ['/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/A', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/B', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/C', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/D', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/E', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/F', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/G', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/H', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/I', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/J'] /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small already present - Skipping extraction of /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small.tar.gz. ['/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/A', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/B', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/C', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/D', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/E', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/F', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/G', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/H', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/I', '/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/J'] Finish loading /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large.tar.gz and /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small.tar.gz ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. 
Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print(folder) for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/A.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/B.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/C.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/D.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/E.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/F.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/G.pickle already present - Skipping pickling. 
/Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/H.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/I.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/J.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/A.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/B.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/C.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/D.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/E.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/F.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/G.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/H.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/I.pickle already present - Skipping pickling. /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code # Problem 2 for dataset in train_datasets: print("load: %s" % dataset) with open(dataset, 'rb') as f: sample = pickle.load(f) print(sample.shape) plt.imshow(sample[0, :, :]) plt.show() ###Output load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/A.pickle (52909, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/B.pickle (52911, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/C.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/D.pickle (52911, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/E.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/F.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/G.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/H.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/I.pickle (52912, 28, 28) load: /Users/zhaoyiwei/Projects/tensorflow/tensorflow/examples/udacity/notMNIST_large/J.pickle (52911, 28, 28) ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
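After the merge cell below has run, an `np.bincount` over the label arrays confirms the classes stay balanced (a sketch reusing the `train_labels` and `valid_labels` names defined below):

###Code
# Run after the merge cell below: with train_size=200000, valid_size=10000 and
# 10 classes, every label should appear 20000 times in train_labels and
# 1000 times in valid_labels.
print('train label counts:', np.bincount(train_labels, minlength=10))
print('valid label counts:', np.bincount(valid_labels, minlength=10))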
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800441 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! 
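For Problem 4, one simple check after the randomize cell below is to display a few shuffled examples next to their labels (a sketch reusing `plt`, `train_dataset` and `train_labels` from earlier cells):

###Code
# Run after the randomize cell below: each displayed image should still show
# the letter its label claims (0 -> 'A', ..., 9 -> 'J').
for i in range(3):
    plt.imshow(train_dataset[i], cmap='gray')
    plt.title('label %d (letter %s)' % (train_labels[i], chr(ord('A') + int(train_labels[i]))))
    plt.show()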
Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code # create a feature vector for each image, including mean, devation, square mean... # build a rkd tree based on these features ###Output _____no_output_____ ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn import datasets, linear_model # train logic regressor using different number of samples and see the result def train_and_test_logreg(train_dataset, train_labels, valid_dataset, valid_labels, num_samples): logreg_n = linear_model.LogisticRegression() d_n = flat_dataset[:num_samples, :] l_n = flat_labels[:num_samples] #print("Train on %d samples" % (num_samples)) logreg_n.fit(d_n, l_n) #print("Test on %d samples" % (num_samples)) pre_labels = logreg_n.predict(valid_dataset) diff_labels = valid_labels - pre_labels correct = sum(1 for i in diff_labels if i == 0) print("[%d] samples, out of %d samples, %d correct, prct %f" % \ (num_samples, len(diff_labels), correct, float(correct) / len(diff_labels))) # prepare the data print('train shape:', train_dataset.shape, train_labels.shape) dataset_shape = train_dataset.shape labels_shape = train_labels.shape flat_dataset = train_dataset.reshape([dataset_shape[0], dataset_shape[1] * dataset_shape[2]]) flat_labels = train_labels print('flattened train shape:', flat_dataset.shape, flat_labels.shape) print('valid shape:', valid_dataset.shape, valid_labels.shape) valid_dataset_shape = valid_dataset.shape flat_valid_dataset = valid_dataset.reshape([valid_dataset_shape[0], valid_dataset_shape[1] * valid_dataset_shape[2]]) flat_valid_labels = valid_labels print('flattened valid shape:', flat_valid_dataset.shape, flat_valid_labels.shape) # try on different size of training train_and_test_logreg(flat_dataset, flat_labels, flat_valid_dataset, flat_valid_labels, 50) train_and_test_logreg(flat_dataset, flat_labels, flat_valid_dataset, flat_valid_labels, 100) train_and_test_logreg(flat_dataset, flat_labels, flat_valid_dataset, flat_valid_labels, 1000) train_and_test_logreg(flat_dataset, flat_labels, flat_valid_dataset, flat_valid_labels, 5000) train_and_test_logreg(flat_dataset, flat_labels, flat_valid_dataset, flat_valid_labels, dataset_shape[0]) ###Output train shape: (200000, 28, 28) (200000,) flattened train shape: (200000, 784) (200000,) valid shape: (10000, 28, 28) (10000,) flattened valid shape: (10000, 784) (10000,) [50] samples, out of 10000 samples, 4649 correct, prct 0.464900 [100] samples, out of 10000 samples, 6322 correct, prct 0.632200 [1000] samples, out of 10000 samples, 7577 correct, prct 0.757700 [5000] samples, out of 10000 samples, 7757 correct, prct 0.775700 [200000] samples, out of 10000 
samples, 8243 correct, prct 0.824300 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified ./notMNIST_large.tar.gz Found and verified ./notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' 
% root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ./notMNIST_large already present - Skipping extraction of ./notMNIST_large.tar.gz. ['./notMNIST_large/A', './notMNIST_large/B', './notMNIST_large/C', './notMNIST_large/D', './notMNIST_large/E', './notMNIST_large/F', './notMNIST_large/G', './notMNIST_large/H', './notMNIST_large/I', './notMNIST_large/J'] ./notMNIST_small already present - Skipping extraction of ./notMNIST_small.tar.gz. ['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D', './notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H', './notMNIST_small/I', './notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- ###Code # Print the first 10 files for first folder (letter A) for file in os.listdir(train_folders[0])[:10]: fullPath = os.path.join(train_folders[0], file) print(fullPath) display(Image(filename=fullPath)) ###Output ./notMNIST_large/A/VmFkaW0ncyBXcml0aW5nLnR0Zg==.png ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
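# Note: this copy of the notebook reads images with imageio.imread instead of
# the scipy.ndimage.imread used in the earlier copies (ndimage.imread has been
# removed from recent SciPy releases); the normalization below still maps raw
# pixels from [0, 255] to roughly [-0.5, 0.5].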
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output ./notMNIST_large/A.pickle already present - Skipping pickling. ./notMNIST_large/B.pickle already present - Skipping pickling. ./notMNIST_large/C.pickle already present - Skipping pickling. ./notMNIST_large/D.pickle already present - Skipping pickling. ./notMNIST_large/E.pickle already present - Skipping pickling. ./notMNIST_large/F.pickle already present - Skipping pickling. ./notMNIST_large/G.pickle already present - Skipping pickling. ./notMNIST_large/H.pickle already present - Skipping pickling. ./notMNIST_large/I.pickle already present - Skipping pickling. ./notMNIST_large/J.pickle already present - Skipping pickling. ./notMNIST_small/A.pickle already present - Skipping pickling. ./notMNIST_small/B.pickle already present - Skipping pickling. ./notMNIST_small/C.pickle already present - Skipping pickling. ./notMNIST_small/D.pickle already present - Skipping pickling. ./notMNIST_small/E.pickle already present - Skipping pickling. ./notMNIST_small/F.pickle already present - Skipping pickling. ./notMNIST_small/G.pickle already present - Skipping pickling. ./notMNIST_small/H.pickle already present - Skipping pickling. ./notMNIST_small/I.pickle already present - Skipping pickling. ./notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code # print first 10 samples from letter a with open(train_datasets[0],'rb') as f: tmp = pickle.load(f) for i in range(10): sample_image = tmp[i,:,:] plt.figure() plt.imshow(sample_image) ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. 
Verify that.--- ###Code for i in range(len(train_datasets)): with open(train_datasets[i],'rb') as f: tmp = pickle.load(f) print('Traning set label', i, 'count:', len(tmp)) for i in range(len(test_datasets)): with open(test_datasets[i],'rb') as f: tmp = pickle.load(f) print('Traning set label', i, 'count:', len(tmp)) ###Output Traning set label 0 count: 52909 Traning set label 1 count: 52911 Traning set label 2 count: 52912 Traning set label 3 count: 52911 Traning set label 4 count: 52912 Traning set label 5 count: 52912 Traning set label 6 count: 52912 Traning set label 7 count: 52912 Traning set label 8 count: 52912 Traning set label 9 count: 52911 Traning set label 0 count: 1872 Traning set label 1 count: 1873 Traning set label 2 count: 1873 Traning set label 3 count: 1873 Traning set label 4 count: 1873 Traning set label 5 count: 1872 Traning set label 6 count: 1872 Traning set label 7 count: 1872 Traning set label 8 count: 1872 Traning set label 9 count: 1872 ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
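The randomize helper that follows applies one permutation to both the image array and the label array, so every (image, label) pair stays intact. As a minimal illustration of the idea on toy arrays (nothing here touches the real dataset): ###Code
import numpy as np

toy_images = np.array([[0.1], [0.2], [0.3]])   # stand-ins for 28x28 image arrays
toy_labels = np.array([0, 1, 2])               # matching labels

perm = np.random.permutation(toy_labels.shape[0])
shuffled_images = toy_images[perm]             # the same permutation is applied to both arrays,
shuffled_labels = toy_labels[perm]             # so each image keeps its original label

print(list(zip(shuffled_images.ravel(), shuffled_labels)))
###Output _____no_output_____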
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code print('training set labels:') for i in range(10): sys.stdout.write(str(train_labels[i])+" ") sys.stdout.write('\n') for i in range(10): sample = train_dataset[i,:,:] plt.figure() plt.imshow(sample) print('test set labels:') for i in range(10): sys.stdout.write(str(test_labels[i])+" ") sys.stdout.write('\n') print('validation set labels:') for i in range(10): sys.stdout.write(str(valid_labels[i])+" ") sys.stdout.write('\n') ###Output training set labels: 7 4 5 0 8 7 4 5 5 3 test set labels: 2 6 0 0 4 5 6 3 2 4 validation set labels: 4 5 8 4 9 5 6 7 8 2 ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800506 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? 
(images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code #### Build dictionary for training set train_dict = {} for i in range(len(train_dataset)): tmp = train_dataset[i,:,:] tmp_bytes = tmp.data.tobytes() if tmp_bytes not in train_dict: train_dict[tmp_bytes] = i #### Record collision indexes between train vs validation valid_redundant_index = [] for i in range(len(valid_dataset)): tmp = valid_dataset[i,:,:] tmp_bytes = tmp.data.tobytes() if tmp_bytes in train_dict: valid_redundant_index.append(i) #### Build the clean validation set len_new_valid_dataset = len(valid_dataset)-len(valid_redundant_index) new_valid_dataset = np.zeros((len_new_valid_dataset,28,28), dtype=np.float32) new_valid_labels = np.zeros(len_new_valid_dataset, dtype=np.int32) j = 0 for i in range(len(valid_dataset)): if i not in valid_redundant_index: new_valid_dataset[j,:,:] = valid_dataset[i,:,:] new_valid_labels[j] = valid_labels[i] j = j + 1 print('Validation set %d to %d (%.2f%% redundancy)' % (len(valid_dataset),len_new_valid_dataset,100*len(valid_redundant_index)/len(valid_dataset))) #### Record collision indexes between train vs validation test_redundant_index = [] for i in range(len(test_dataset)): tmp = test_dataset[i,:,:] tmp_bytes = tmp.data.tobytes() if tmp_bytes in train_dict: test_redundant_index.append(i) #### Build the clean validation set len_new_test_dataset = len(test_dataset)-len(test_redundant_index) new_test_dataset = np.zeros((len_new_test_dataset,28,28), dtype=np.float32) new_test_labels = np.zeros(len_new_test_dataset, dtype=np.int32) j = 0 for i in range(len(test_dataset)): if i not in test_redundant_index: new_test_dataset[j,:,:] = test_dataset[i,:,:] new_test_labels[j] = test_labels[i] j = j + 1 print('Test set %d to %d (%.2f%% redundancy)' % (len(test_dataset),len_new_test_dataset,100*len(test_redundant_index)/len(test_dataset))) ###Output Validation set 10000 to 8976 (10.24% redundancy) Test set 10000 to 8718 (12.82% redundancy) ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. 
Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code cls = LogisticRegression() small_set = train_dataset[0:5000,:,:] small_set = small_set.reshape(len(small_set),28*28) small_label = train_labels[0:5000] cls.fit(small_set, small_label) print('training accuracy:', cls.score(small_set, small_label)) from sklearn import metrics cv_predicted = cls.predict(new_valid_dataset.reshape(len(new_valid_dataset),28*28)) cv_accuracy = metrics.accuracy_score(new_valid_labels, cv_predicted) print('cv accuracy:', cv_accuracy) test_predicted = cls.predict(new_test_dataset.reshape(len(new_test_dataset),28*28)) test_accuracy = metrics.accuracy_score(new_test_labels, test_predicted) print('test accuracy:', test_accuracy) ###Output cv accuracy: 0.761140819964 test accuracy: 0.839871530167 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import random import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from sklearn.metrics import precision_recall_fscore_support from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://yaroslavvb.com/upload/notMNIST/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' 
% (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz. ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz. ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- ###Code def display_random_image(): letter = random.choice(['A', 'B', 'C', 'D', 'E', 'F', 'J']) small_or_large = random.choice(['small', 'large']) dir_name = 'notMNIST_%s/%s/' % (small_or_large, letter) images = os.listdir(dir_name) image_name = random.choice(images) filename = '%s%s' % (dir_name, image_name) print(filename) return Image(filename=filename) display_random_image() ###Output notMNIST_large/E/TmV3LVlvcmstRXh0ZW5kZWQgQm9sZC50dGY=.png ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
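# Note: this version of load_letter reads images with scipy.ndimage.imread, which was deprecated and
# later removed from SciPy (1.2+); imageio.imread, as used in the copy of this notebook above, is the
# usual drop-in replacement.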
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print(folder) for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A.pickle already present - Skipping pickling. notMNIST_large/B.pickle already present - Skipping pickling. notMNIST_large/C.pickle already present - Skipping pickling. notMNIST_large/D.pickle already present - Skipping pickling. notMNIST_large/E.pickle already present - Skipping pickling. notMNIST_large/F.pickle already present - Skipping pickling. notMNIST_large/G.pickle already present - Skipping pickling. notMNIST_large/H.pickle already present - Skipping pickling. notMNIST_large/I.pickle already present - Skipping pickling. notMNIST_large/J.pickle already present - Skipping pickling. notMNIST_small/A.pickle already present - Skipping pickling. notMNIST_small/B.pickle already present - Skipping pickling. notMNIST_small/C.pickle already present - Skipping pickling. notMNIST_small/D.pickle already present - Skipping pickling. notMNIST_small/E.pickle already present - Skipping pickling. notMNIST_small/F.pickle already present - Skipping pickling. notMNIST_small/G.pickle already present - Skipping pickling. notMNIST_small/H.pickle already present - Skipping pickling. notMNIST_small/I.pickle already present - Skipping pickling. notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.--- ###Code %matplotlib inline def display_random_image(): letter = random.choice(['A', 'B', 'C', 'D', 'E', 'F', 'J']) small_or_large = random.choice(['small', 'large']) filename = 'notMNIST_%s/%s.pickle' % (small_or_large, letter) arr = pickle.load(open(filename, 'r')) rand_image_number = random.randrange(1, arr.shape[0]) plt.imshow(arr[rand_image_number, :, :]) display_random_image() ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code ## Load all pickles, save number of images pickles = filter(lambda x: 'pickle' in x, os.listdir('./notMNIST_large')) for pickle_file in pickles: small = pickle.load(open(os.path.join('./notMNIST_large', pickle_file), 'r')) print(pickle_file + ' image ' + str(small.shape[0])) for pickle_file in pickles: small = pickle.load(open(os.path.join('./notMNIST_small', pickle_file), 'r')) print(pickle_file + ' image ' + str(small.shape[0])) ###Output A.pickle image 52909 B.pickle image 52911 C.pickle image 52912 D.pickle image 52911 E.pickle image 52912 F.pickle image 52912 G.pickle image 52912 H.pickle image 52912 I.pickle image 52912 J.pickle image 52911 A.pickle image 1872 B.pickle image 1873 C.pickle image 1873 D.pickle image 1873 E.pickle image 1873 F.pickle image 1872 G.pickle image 1872 H.pickle image 1872 I.pickle image 1872 J.pickle image 1872 ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) 
print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code def check_image(dataset, label, index): print(string.uppercase[label[index]]) return plt.imshow(dataset[index, :, :]) def check_random_image(dataset, label): rand_image_number = random.randrange(1, dataset.shape[0]) return check_image(dataset, label, rand_image_number) check_image(train_dataset, train_labels, 6) check_random_image(train_dataset, train_labels) ###Output D ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800441 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code import time def check_overlaps(images1, images2): images1.flags.writeable=False images2.flags.writeable=False start = time.clock() hash1 = set([hash(image1.data) for image1 in images1]) hash2 = set([hash(image2.data) for image2 in images2]) all_overlaps = set.intersection(hash1, hash2) return all_overlaps, time.clock()-start r, execTime = check_overlaps(train_dataset, test_dataset) print("# overlaps between training and test sets:", len(r), "execution time:", execTime) r, execTime = check_overlaps(train_dataset, valid_dataset) print("# overlaps between training and validation sets:", len(r), "execution time:", execTime) r, execTime = check_overlaps(valid_dataset, test_dataset) print("# overlaps between validation and test sets:", len(r), "execution time:", execTime) count_duplicates(train_dataset, train_labels, test_dataset, test_labels, valid_dataset, valid_labels) ###Output 0 1 ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. 
It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code X = train_dataset.reshape((train_dataset.shape[0], -1)) X_test = test_dataset.reshape((test_dataset.shape[0], -1)) mod = LogisticRegression() model_indexes = range(0, X.shape[0]) mod.fit(X=X[model_indexes, :], y=train_labels[model_indexes]) predictions = mod.predict(X_test) precision_recall_fscore_support(test_labels, predictions) ###Output _____no_output_____ ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. 
Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
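# The load_letter below preallocates one 28x28 slot per file in the folder and then trims the array
# down to num_images at the end, since a few files turn out not to be readable as images.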
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle from collections import Counter # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. 
Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz. ['notMNIST_large\\A', 'notMNIST_large\\B', 'notMNIST_large\\C', 'notMNIST_large\\D', 'notMNIST_large\\E', 'notMNIST_large\\F', 'notMNIST_large\\G', 'notMNIST_large\\H', 'notMNIST_large\\I', 'notMNIST_large\\J'] notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz. ['notMNIST_small\\A', 'notMNIST_small\\B', 'notMNIST_small\\C', 'notMNIST_small\\D', 'notMNIST_small\\E', 'notMNIST_small\\F', 'notMNIST_small\\G', 'notMNIST_small\\H', 'notMNIST_small\\I', 'notMNIST_small\\J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
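# Caveat for the load_letter variant below: it writes into the slot given by enumerate's index and
# sets num_images = image_index + 1 after the loop, so rows for files that fail to read are left
# uninitialized rather than dropped (unlike the earlier versions, which only advance the row counter
# on a successful read).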
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) for folder in train_folders: image_files = os.listdir(folder) image_file = os.path.join(folder, image_files[0]) display(Image(filename=image_file)) ###Output notMNIST_large\A.pickle already present - Skipping pickling. notMNIST_large\B.pickle already present - Skipping pickling. notMNIST_large\C.pickle already present - Skipping pickling. notMNIST_large\D.pickle already present - Skipping pickling. notMNIST_large\E.pickle already present - Skipping pickling. notMNIST_large\F.pickle already present - Skipping pickling. notMNIST_large\G.pickle already present - Skipping pickling. notMNIST_large\H.pickle already present - Skipping pickling. notMNIST_large\I.pickle already present - Skipping pickling. notMNIST_large\J.pickle already present - Skipping pickling. notMNIST_small\A.pickle already present - Skipping pickling. notMNIST_small\B.pickle already present - Skipping pickling. notMNIST_small\C.pickle already present - Skipping pickling. notMNIST_small\D.pickle already present - Skipping pickling. notMNIST_small\E.pickle already present - Skipping pickling. notMNIST_small\F.pickle already present - Skipping pickling. notMNIST_small\G.pickle already present - Skipping pickling. notMNIST_small\H.pickle already present - Skipping pickling. notMNIST_small\I.pickle already present - Skipping pickling. notMNIST_small\J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.--- ###Code with open(train_datasets[0], 'rb') as f: letter_set = pickle.load(f) plt.imshow(letter_set[0]) with open(train_datasets[1], 'rb') as f: letter_set = pickle.load(f) plt.imshow(letter_set[0]) ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): holder = 0 print (label) try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 20000 valid_size = 1000 test_size = 1000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('http://localhost:8888/notebooks/1_notmnist.ipynb#Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) for im, label in zip(train_dataset, train_labels)[:5] + zip(train_dataset, train_labels)[-5:]: plt.figure() plt.imshow(im) ###Output 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 Training: (20000, 28, 28) (20000,) http://localhost:8888/notebooks/1_notmnist.ipynb#Validation: (1000, 28, 28) (1000,) Testing: (1000, 28, 28) (1000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
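One way to check that claim is to compare the per-class label frequencies of the training and test sets; because merge_datasets allocates an equal number of examples per class, both should come out close to uniform (about 0.1 per class). A small sketch, assuming the train_labels and test_labels arrays built above: ###Code
import numpy as np

# Per-class frequency of each label, computed the same way for both splits
train_freq = np.bincount(train_labels, minlength=10) / float(len(train_labels))
test_freq = np.bincount(test_labels, minlength=10) / float(len(test_labels))

for label, (tr, te) in enumerate(zip(train_freq, test_freq)):
    print('class %d: train %.3f, test %.3f' % (label, tr, te))
###Output _____no_output_____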
###Code Counter(train_labels) def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code plt.imshow(train_dataset[0]) plt.imshow(train_dataset[10000]) ###Output _____no_output_____ ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 69080437 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn import linear_model from sklearn import pipeline from sklearn import cross_validation reg = LogisticRegression() train_dataset_flat = [np.ndarray.flatten(t) for t in train_dataset] valid_dataset_flat = [np.ndarray.flatten(t) for t in valid_dataset] #cross_validation.cross_val_score( # reg, # train_dataset_flat[:50], train_labels[:50] #) #cross_validation.cross_val_score( # reg, # train_dataset_flat[:100], train_labels[:100] #) #cross_validation.cross_val_score( # reg, # train_dataset_flat[:1000], train_labels[:1000] #) reg.fit(train_dataset_flat, train_labels) from sklearn.metrics import accuracy_score pred = reg.predict(valid_data) score = accuracy_score(pred, valid_labels) ###Output _____no_output_____ ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. 
This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' 
% ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. 
Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. 
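Before merging, here is a minimal sketch for the class-balance check asked for in Problem 3 above; it simply reloads each per-class pickle produced by `maybe_pickle` and reports how many images it contains (roughly equal counts mean the classes are balanced): ###Code
# Problem 3 sketch: count the images stored in each per-class pickle file.
counts = []
for pickle_file in train_datasets:
    with open(pickle_file, 'rb') as f:
        letter_set = pickle.load(f)
    counts.append(len(letter_set))
    print(pickle_file, len(letter_set))
print('smallest class:', min(counts), '- largest class:', max(counts))
###Output _____no_output_____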
###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
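Counting labels per split is a simple way to see whether the class distributions of the training, validation and test data actually match; shuffling changes the order of the samples but should leave these counts untouched. A minimal sketch using `np.bincount`: ###Code
# Count how many examples of each label 0-9 ended up in each split.
# These per-class counts are unchanged by the permutation applied below.
for name, labels in [('train', train_labels),
                     ('valid', valid_labels),
                     ('test', test_labels)]:
    print(name, np.bincount(labels, minlength=10))
###Output _____no_output_____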
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://yaroslavvb.com/upload/notMNIST/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print 'Found and verified', filename else: raise Exception( 'Failed to verify' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 def extract(filename): tar = tarfile.open(filename) root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz print('Extracting data for %s. This may take a while. 
Please wait.' % root) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if d != '.DS_Store'] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = extract(train_filename) test_folders = extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load(data_folders, min_num_images, max_num_images): dataset = np.ndarray( shape=(max_num_images, image_size, image_size), dtype=np.float32) labels = np.ndarray(shape=(max_num_images), dtype=np.int32) label_index = 0 image_index = 0 for folder in data_folders: print(folder) for image in os.listdir(folder): if image_index >= max_num_images: raise Exception('More images than expected: %d >= %d' % ( image_index, max_num_images)) image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data labels[image_index] = label_index image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') label_index += 1 num_images = image_index dataset = dataset[0:num_images, :, :] labels = labels[0:num_images] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % ( num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) print('Labels:', labels.shape) return dataset, labels train_dataset, train_labels = load(train_folders, 450000, 550000) test_dataset, test_labels = load(test_folders, 18000, 20000) ###Output notMNIST_large/A Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. 
notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. notMNIST_large/C notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. notMNIST_large/E notMNIST_large/F notMNIST_large/G notMNIST_large/H notMNIST_large/I notMNIST_large/J Full dataset tensor: (529114, 28, 28) Mean: -0.0816593 Standard deviation: 0.454232 Labels: (529114,) notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. notMNIST_small/B notMNIST_small/C notMNIST_small/D notMNIST_small/E notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. notMNIST_small/G notMNIST_small/H notMNIST_small/I notMNIST_small/J Full dataset tensor: (18724, 28, 28) Mean: -0.0746364 Standard deviation: 0.458622 Labels: (18724,) ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code np.random.seed(133) def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) ###Output _____no_output_____ ###Markdown ---Problem 3---------Convince yourself that the data is still good after shuffling!--- ---Problem 4---------Another check: we expect the data to be balanced across classes. Verify that.--- Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.Also create a validation dataset for hyperparameter tuning. ###Code train_size = 200000 valid_size = 10000 valid_dataset = train_dataset[:valid_size,:,:] valid_labels = train_labels[:valid_size] train_dataset = train_dataset[valid_size:valid_size+train_size,:,:] train_labels = train_labels[valid_size:valid_size+train_size] print('Training', train_dataset.shape, train_labels.shape) print('Validation', valid_dataset.shape, valid_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. 
This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz. ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz. ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. 
Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- ###Code from IPython.display import Image, display import os, sys import random image_dir = "notMNIST_large/" for root, dirs, files in os.walk(image_dir): for letter in dirs: directory = os.path.relpath(image_dir + letter) print(directory + "\n") files = os.listdir(directory) sample = random.sample(files, 5) for file in sample: display(Image(filename=directory + '/' + file)) ###Output notMNIST_large/A ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A.pickle already present - Skipping pickling. notMNIST_large/B.pickle already present - Skipping pickling. notMNIST_large/C.pickle already present - Skipping pickling. notMNIST_large/D.pickle already present - Skipping pickling. notMNIST_large/E.pickle already present - Skipping pickling. notMNIST_large/F.pickle already present - Skipping pickling. notMNIST_large/G.pickle already present - Skipping pickling. notMNIST_large/H.pickle already present - Skipping pickling. notMNIST_large/I.pickle already present - Skipping pickling. 
notMNIST_large/J.pickle already present - Skipping pickling. notMNIST_small/A.pickle already present - Skipping pickling. notMNIST_small/B.pickle already present - Skipping pickling. notMNIST_small/C.pickle already present - Skipping pickling. notMNIST_small/D.pickle already present - Skipping pickling. notMNIST_small/E.pickle already present - Skipping pickling. notMNIST_small/F.pickle already present - Skipping pickling. notMNIST_small/G.pickle already present - Skipping pickling. notMNIST_small/H.pickle already present - Skipping pickling. notMNIST_small/I.pickle already present - Skipping pickling. notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code for dataset in train_datasets: print(dataset) with open(dataset, 'rb') as f: sample = pickle.load(f)[0:5] for img_data in sample: plt.figure(figsize=(1,1)) plt.imshow(img_data) plt.gray() plt.show() ###Output notMNIST_large/A.pickle ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. 
It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code sample = train_dataset[0:10] for img_data in sample: plt.figure(figsize=(1,1)) plt.imshow(img_data) plt.gray() plt.show() ###Output _____no_output_____ ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800503 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ###Code from scipy.linalg import norm from scipy import sum, average def show_img(img): plt.figure(figsize=(1,1)) plt.imshow(img) plt.gray() plt.show() def normalize(arr): rng = arr.max() - arr.min() amin = arr.min() return (arr - amin) * 1 / rng def compare_images(img1, img2): # noralize to compensate for exposure difference, this may be unnecessary # consider disabling it #img1 = normalize(img1) #img2 = normalize(img2) # calculate the difference and its norms diff = img1 - img2 # elementwise for arrays m_norm = sum(abs(diff)) # Manhattan norm #z_norm = norm(diff.ravel(), 0) # Zero norm return (m_norm)#, z_norm) def show_comparison(img1, img2, n_m): show_img(img1) show_img(img2) print("Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size) #print("Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size) def contains_similar_image(dataset, img): for img1 in dataset: n_m = compare_images(img, img1) if n_m < 0.5: s = "." 
print(s, end='') return True return False with open('notMNIST.pickle', 'rb') as f: data_sets = pickle.load(f) train_dataset = data_sets['train_dataset'] train_labels = data_sets['train_labels'] valid_dataset = data_sets['valid_dataset'] valid_labels = data_sets['valid_labels'] test_dataset = data_sets['test_dataset'] test_labels = data_sets['test_labels'] n_m = compare_images(train_dataset[0], train_dataset[1]) show_comparison(train_dataset[0], train_dataset[1], n_m) # Takes way too long valid_ind = [i for i, v in enumerate(valid_dataset) if not contains_similar_image(train_dataset, v)] valid_dataset = valid_dataset[valid_ind] valid_labels = valid_labels[valid_ind] print('validation set is ready') test_ind = [i for i, v in enumerate(test_dataset) if not contains_similar_image(train_dataset, v)] test_dataset = test_dataset[test_ind] test_labels = test_labels[test_ind] print('testing set is ready') print('Training:', train_dataset.shape) print('Validation sanitized:', valid_dataset.shape, valid_labels.shape) print('Testing sanitized:', test_dataset.shape, test_labels.shape) pickle_file = 'notMNIST_sanitized.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output _____no_output_____ ###Markdown ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn import metrics def train_model(X_train, X_test, y_train, y_test): model = LogisticRegression() model.fit(X_train, y_train) predicted = model.predict(X_test) print(metrics.accuracy_score(y_test, predicted)) with open('notMNIST.pickle', 'rb') as f: data_sets = pickle.load(f) train_dataset = data_sets['train_dataset'] train_labels = data_sets['train_labels'] valid_dataset = data_sets['valid_dataset'] valid_labels = data_sets['valid_labels'] test_dataset = data_sets['test_dataset'] test_labels = data_sets['test_labels'] nsamples, nx, ny = train_dataset.shape train_dataset = train_dataset.reshape((nsamples, nx*ny)) nsamples, nx, ny = test_dataset.shape test_dataset = test_dataset.reshape((nsamples, nx*ny)) train_model(train_dataset[0:5000], test_dataset, train_labels[0:5000], test_labels) ###Output 0.8517 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. 
Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz. ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz. ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. 
Hint: you can use the package IPython.display.--- ###Code from IPython.display import display from IPython.display import Image i = Image(filename='notMNIST_large/A/ZXRjaHkudHRm.png') display(i) ###Output _____no_output_____ ###Markdown Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) for image_index, image in enumerate(image_files): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index + 1 dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A.pickle already present - Skipping pickling. notMNIST_large/B.pickle already present - Skipping pickling. notMNIST_large/C.pickle already present - Skipping pickling. notMNIST_large/D.pickle already present - Skipping pickling. notMNIST_large/E.pickle already present - Skipping pickling. notMNIST_large/F.pickle already present - Skipping pickling. notMNIST_large/G.pickle already present - Skipping pickling. notMNIST_large/H.pickle already present - Skipping pickling. notMNIST_large/I.pickle already present - Skipping pickling. notMNIST_large/J.pickle already present - Skipping pickling. notMNIST_small/A.pickle already present - Skipping pickling. notMNIST_small/B.pickle already present - Skipping pickling. notMNIST_small/C.pickle already present - Skipping pickling. notMNIST_small/D.pickle already present - Skipping pickling. notMNIST_small/E.pickle already present - Skipping pickling. 
notMNIST_small/F.pickle already present - Skipping pickling. notMNIST_small/G.pickle already present - Skipping pickling. notMNIST_small/H.pickle already present - Skipping pickling. notMNIST_small/I.pickle already present - Skipping pickling. notMNIST_small/J.pickle already present - Skipping pickling. ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ###Code import matplotlib.pyplot as plt img_A = pickle.load( open('notMNIST_large/A.pickle', 'rb') ) plt.imshow(img_A[2]) plt.gray() plt.show() print(len(img_A)) ###Output _____no_output_____ ###Markdown ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- ###Code for class_data in train_datasets: print(class_data) img_class = pickle.load( open(class_data, 'rb') ) print(len(img_class)) ###Output notMNIST_large/A.pickle 52912 notMNIST_large/B.pickle 52912 notMNIST_large/C.pickle 52912 notMNIST_large/D.pickle 52912 notMNIST_large/E.pickle 52912 notMNIST_large/F.pickle 52912 notMNIST_large/G.pickle 52912 notMNIST_large/H.pickle 52912 notMNIST_large/I.pickle 52912 notMNIST_large/J.pickle 52911 ###Markdown Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Next, we'll 
randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- ###Code print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training: (200000, 28, 28) (200000,) Validation: (10000, 28, 28) (10000,) Testing: (10000, 28, 28) (10000,) ###Markdown Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 690800441 ###Markdown ---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.--- ---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!--- ###Code from sklearn.metrics import confusion_matrix, classification_report ntrain = -1 X, y = train_dataset[:ntrain].reshape(-1, train_dataset.shape[1]*train_dataset.shape[2]), train_labels[:ntrain] logistic = LogisticRegression(multi_class="multinomial", solver="lbfgs") logistic.fit(X,y) print("Predictionis...") n_val = 10000 X_val, y_val = valid_dataset[:n_val].reshape(-1, valid_dataset.shape[1]*valid_dataset.shape[2]), valid_labels[:n_val] y_pred = logistic.predict(X_val) print("Confusion matrix: ") plt.pcolor(confusion_matrix(y_pred, y_val), cmap='Blues') labels = [chr(k) for k in range(ord("A"), ord("J")+1)] print('Score: ', classification_report(y_pred, y_val, target_names=labels)) ###Output Predictionis... 
Confusion matrix: Score: precision recall f1-score support A 0.82 0.85 0.83 956 B 0.83 0.85 0.84 972 C 0.87 0.82 0.85 1055 D 0.86 0.85 0.85 1009 E 0.78 0.86 0.82 899 F 0.86 0.86 0.86 1004 G 0.79 0.80 0.79 985 H 0.82 0.81 0.82 1019 I 0.79 0.75 0.77 1057 J 0.85 0.82 0.83 1044 avg / total 0.83 0.83 0.83 10000 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matlotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. 
print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' 
% set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. 
Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
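As a quick illustration of why the same permutation must be applied to both arrays, here is a self-contained toy sketch (the `toy_data`/`toy_labels` names are made up purely for illustration): ###Code
# Toy illustration: indexing data and labels with the same permutation preserves their pairing.
import numpy as np

toy_data = np.array([[10], [20], [30], [40]])
toy_labels = np.array([0, 1, 2, 3])
perm = np.random.permutation(toy_labels.shape[0])
print(toy_data[perm].ravel())   # e.g. [30 10 40 20]
print(toy_labels[perm])         # e.g. [2 0 3 1] -- each value still matches its original row
###Output
_____no_output_____
###Markdown The same idea, applied to the full datasets: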
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. 
###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) image_index = 0 print(folder) for image in os.listdir(folder): image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[image_index, :, :] = image_data image_index += 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') num_images = image_index dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. 
print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. 
Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training (200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. 
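Since Problem 3 above asks to verify that the classes are balanced, a quick count of labels per split is useful here, and re-running it after the shuffle below should give identical counts. This is a hedged sketch that assumes the split arrays from the previous cell: ###Code
# Count how many examples of each class ended up in each split (should be roughly equal per class).
for name, labels in [('train', train_labels), ('valid', valid_labels), ('test', test_labels)]:
    print(name, np.bincount(labels))
###Output
_____no_output_____
###Markdown Now apply the shuffle: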
###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801 ###Markdown Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from scipy import ndimage from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline ###Output _____no_output_____ ###Markdown First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. ###Code url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. 
""" global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) ###Output Found and verified notMNIST_large.tar.gz Found and verified notMNIST_small.tar.gz ###Markdown Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J. ###Code num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) ###Output ['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J'] ['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J'] ###Markdown ---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.--- Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. ###Code image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) ###Output notMNIST_large/A Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping. Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52909, 28, 28) Mean: -0.12848 Standard deviation: 0.425576 notMNIST_large/B Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.00755947 Standard deviation: 0.417272 notMNIST_large/C Full dataset tensor: (52912, 28, 28) Mean: -0.142321 Standard deviation: 0.421305 notMNIST_large/D Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (52911, 28, 28) Mean: -0.0574553 Standard deviation: 0.434072 notMNIST_large/E Full dataset tensor: (52912, 28, 28) Mean: -0.0701406 Standard deviation: 0.42882 notMNIST_large/F Full dataset tensor: (52912, 28, 28) Mean: -0.125914 Standard deviation: 0.429645 notMNIST_large/G Full dataset tensor: (52912, 28, 28) Mean: -0.0947771 Standard deviation: 0.421674 notMNIST_large/H Full dataset tensor: (52912, 28, 28) Mean: -0.0687667 Standard deviation: 0.430344 notMNIST_large/I Full dataset tensor: (52912, 28, 28) Mean: 0.0307405 Standard deviation: 0.449686 notMNIST_large/J Full dataset tensor: (52911, 28, 28) Mean: -0.153479 Standard deviation: 0.397169 notMNIST_small/A Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping. 
Full dataset tensor: (1872, 28, 28) Mean: -0.132588 Standard deviation: 0.445923 notMNIST_small/B Full dataset tensor: (1873, 28, 28) Mean: 0.00535619 Standard deviation: 0.457054 notMNIST_small/C Full dataset tensor: (1873, 28, 28) Mean: -0.141489 Standard deviation: 0.441056 notMNIST_small/D Full dataset tensor: (1873, 28, 28) Mean: -0.0492094 Standard deviation: 0.460477 notMNIST_small/E Full dataset tensor: (1873, 28, 28) Mean: -0.0598952 Standard deviation: 0.456146 notMNIST_small/F Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping. Full dataset tensor: (1872, 28, 28) Mean: -0.118148 Standard deviation: 0.451134 notMNIST_small/G Full dataset tensor: (1872, 28, 28) Mean: -0.092519 Standard deviation: 0.448468 notMNIST_small/H Full dataset tensor: (1872, 28, 28) Mean: -0.0586729 Standard deviation: 0.457387 notMNIST_small/I Full dataset tensor: (1872, 28, 28) Mean: 0.0526481 Standard deviation: 0.472657 notMNIST_small/J Full dataset tensor: (1872, 28, 28) Mean: -0.15167 Standard deviation: 0.449521 ###Markdown ---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.--- ---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.--- Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning. ###Code def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) ###Output Training 
(200000, 28, 28) (200000,) Validation (10000, 28, 28) (10000,) Testing (10000, 28, 28) (10000,) ###Markdown Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. ###Code def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) ###Output _____no_output_____ ###Markdown ---Problem 4---------Convince yourself that the data is still good after shuffling!--- Finally, let's save the data for later reuse: ###Code pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) ###Output Compressed pickle size: 718193801
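###Markdown As a sanity check before moving on, the saved pickle can be reloaded and its contents compared against the arrays above — a hedged sketch (assumes the `pickle_file` variable and the `notMNIST.pickle` file written by the previous cell): ###Code
# Reload the pickle and confirm the stored arrays have the expected shapes.
with open(pickle_file, 'rb') as f:
    reloaded = pickle.load(f)
print('Training:', reloaded['train_dataset'].shape, reloaded['train_labels'].shape)
print('Validation:', reloaded['valid_dataset'].shape, reloaded['valid_labels'].shape)
print('Testing:', reloaded['test_dataset'].shape, reloaded['test_labels'].shape)
###Output
_____no_output_____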
.ipynb_checkpoints/airbnb_bcn-checkpoint.ipynb
###Markdown Barcelona Airbnb 2021 Section 1: Business UnderstandingThis project intends to evaluate the effect of the Covid19 pandemic on [Airbnb hosting prices](http://insideairbnb.com/get-the-data.html) in the city of Barcelona. We use the number of reviews as a metric for the number of tourists in the city and the number of positive cases for the prevalence of [Covid19](https://cnecovid.isciii.es/covid19/documentaci%C3%B3n-y-datos) in the region of Catalonia. Question 1: How was tourism affected by the Covid19 outbreak during the last months? Question 2: How did the price evolved for both visited and non-visited listings? Question 3: Which are the main differences between visited and non-visited listings? Question 4: Which neighbourhoods had more visitors?Sources: [Inside Airbnb](http://insideairbnb.com/get-the-data.html), [Covid19](https://cnecovid.isciii.es/covid19/documentaci%C3%B3n-y-datos) ###Code import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import seaborn as sns import datetime from sklearn.preprocessing import OrdinalEncoder from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn import metrics from sklearn.ensemble import RandomForestClassifier %matplotlib inline #Fetching data. From the Airbnb source, we create a dataset dictionary with an entry for each month lsts = {'aug': pd.read_csv('./listings_aug20.csv'), 'sept': pd.read_csv('./listings_sept20.csv'), 'oct': pd.read_csv('./listings_oct20.csv'), 'nov': pd.read_csv('./listings_nov20.csv'), 'dec': pd.read_csv('./listings_nov20.csv'), 'jan': pd.read_csv('./listings_nov20.csv'), 'feb': pd.read_csv('./listings_nov20.csv'), 'mar': pd.read_csv('./listings_nov20.csv'), 'apr': pd.read_csv('./listings_nov20.csv') } ###Output In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. 
In /home/jorge/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: The animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later. ###Markdown Data ExplorationFor the data exploration, only one dataset will be used since all of them share the same columns and have similar number of listings. Therefore, all of them will be processed equally ###Code # Columns of the datasets lsts['sept'].columns # Size of each dataset for month in lsts: n_listings = lsts[month].shape[0] print(f'There are {n_listings} listings in {month}') lsts['sept'].describe() ###Output _____no_output_____ ###Markdown Preprocessing Data ###Code # We set the identifier as the index for every dataframe for month in lsts: lsts[month].set_index('id', inplace = True) # Some listings have more beds than people that they can accommodate. # This might be due to hosts who did not filled the information properly sum(lsts['sept']['beds']-lsts['sept']['accommodates']>0) # To solve the bed/accomm paradox, the number of occupants is the maximum of both for key in lsts.keys(): lsts[key]['capacity'] = lsts[key].apply(lambda x: max(x['beds'], x['accommodates']), axis = 1) # Column 'price' is string type, whose format is '$1,000.00'. We store the int value in 'price_numeric' # We also are interested in the price per guest, as it might be more relevant for key in lsts.keys(): lsts[key]['price_numeric'] = lsts[key].apply(lambda x: int(x['price'][1:-3].replace(',','')), axis = 1) lsts[key]['price_person'] = lsts[key].apply(lambda x: x['price_numeric']/x['capacity'], axis = 1) # The price distribution shows a massive variance. A small percentage of listings have an extremely high price compared to the others prices_feb = lsts['feb']['price_person'].sort_values().copy() prices_feb = prices_feb.reset_index()['price_person'] plt.plot(prices_feb) # To avoid misleading results, these listings will be trimmed. 
Only the 98 percentile will remain for key in lsts: cutoff = lsts[key]['price_person'].quantile(q=0.98) lsts[key] = lsts[key][lsts[key]['price_person']<cutoff] mean_price = [lsts[key]['price_person'].mean() for key in lsts.keys()] mean_occu = [lsts[key]['number_of_reviews_l30d'].mean() for key in lsts.keys()] ###Output _____no_output_____ ###Markdown Data Cleansing for modeling ###Code # We create a blank dataframe which will be filled with some columns from the original dataframe df = pd.DataFrame() # Our target variable for the model is a boolean variable with value 1 if the listing had any review during the last month and 0 if not df['had_reviews']=lsts['aug']['number_of_reviews_l30d'].apply(lambda x: 0 if x==0 else 1) # Filter numerical features from the DataFrame numerical_columns = lsts['aug'].columns[(lsts['aug'].dtypes==int)|(lsts['aug'].dtypes==float)] # Drop target feature numerical_columns = numerical_columns.drop(['number_of_reviews_l30d']) # Drop potential leaking values numerical_columns = numerical_columns.drop(['availability_30','availability_60','availability_90','availability_365']) # Drop non-relevant features numerical_columns = numerical_columns.drop(['scrape_id', 'host_id']) # Insert into the new DataFrame df[numerical_columns] = lsts['aug'][numerical_columns] numerical_columns lsts['aug'].columns[lsts['aug'].dtypes==object] # let's save only those columns that might be useful bool_columns = ['host_is_superhost', 'host_has_profile_pic', 'host_identity_verified', 'has_availability', 'instant_bookable'] # and change its value to binary df[bool_columns]=lsts['aug'][bool_columns].replace({'t': 1, 'f': 0}) # Some variables need to be ordinally encoded df['host_acceptance_rate'] = lsts['aug']['host_acceptance_rate'].dropna().apply(lambda x: int(x[:-1])) tiers = {'within an hour': 1, 'within a few hours': 2, 'within a day': 3, 'a few days or more': 4} df['host_response_time']=lsts['aug']['host_response_time'].replace(tiers) # We transform temporal variables into int type date_columns = ['last_scraped', 'host_since', 'calendar_last_scraped', 'first_review'] df[date_columns] = lsts['aug'][date_columns].dropna().applymap(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d').toordinal()) # Some variables need to be one-hot encoded neig_columns = ['neighbourhood_cleansed','neighbourhood_group_cleansed'] dummies = pd.get_dummies(lsts['aug'][neig_columns], dummy_na=True) df[dummies.columns] = dummies sns.heatmap(df.isna()) #both 'calendar_updates' and 'bathrooms' features are NaN df.dropna(axis=1, how='all', inplace=True) df.dropna(axis=0, how='all', inplace=True) # we also remove any column with more than a 78% of missing values df.dropna(how='all', thresh = 0.78*df.shape[1], inplace=True) # Some values of number of beds are missing # We calculate the ratio beds/bedroom, and use it for approximating the number of bedrooms ratio_beds_bedrooms = (df['beds']/df['bedrooms']).mean() df.loc[df['bedrooms'].isna(), 'bedrooms'] = df['beds'][df['bedrooms'].isna()]/ratio_beds_bedrooms # For the rest of the values, we simply substitute it with the mode na_columns = df.columns[df.isna().sum()>0] for column in na_columns: df[column].fillna(df[column].mode()[0], inplace=True) df.columns[df.isna().sum()>0] #check there isn't any NaN ###Output _____no_output_____ ###Markdown Model training ###Code X = df.drop(columns=['had_reviews']) y = df['had_reviews'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) scaler = MinMaxScaler() X_train_scaled = 
scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) forest = RandomForestClassifier() forest.fit(X_train, y_train) y_pred = forest.predict(X_test) metrics.confusion_matrix(y_test, y_pred) ###Output _____no_output_____ ###Markdown Question 1: How was tourism affected by the Covid19 outbreak during the last months?The daily record of Covid19 infections in every region of Spain can be found [here](https://cnecovid.isciii.es/covid19/documentaci%C3%B3n-y-datos). ###Code # Dates when each dataset was created dates = {'aug': datetime.datetime(2020,8,24), 'sept': datetime.datetime(2020,9,12), 'oct': datetime.datetime(2020,10,12), 'nov': datetime.datetime(2020,11,6), 'dec': datetime.datetime(2020,12,16), 'jan': datetime.datetime(2021,1,12), 'feb': datetime.datetime(2021,2,9), 'mar': datetime.datetime(2021,3,5), 'apr': datetime.datetime(2021,4,12) } date_series = pd.Series(data=dates) cases_total = pd.read_csv('casos_tecnica_ccaa.csv') #covid cases in Spain cases_total.set_index('ccaa_iso', inplace = True) cases_cat = cases_total.loc['CT',['fecha','num_casos']] #covid cases in Catalonia cases_cat['fecha'] = [datetime.datetime.strptime(date,"%Y-%m-%d" ) for date in cases_cat['fecha']] cases_cat = cases_cat[(cases_cat['fecha']>=dates['aug']-datetime.timedelta(30))&(cases_cat['fecha']<=dates['apr'])] mean_num_reviews = [lsts[key]['number_of_reviews_l30d'].mean() for key in lsts] mean_num_reviews y = mean_num_reviews fig, ax1 = plt.subplots() ax1.plot_date(date_series-datetime.timedelta(15), y, xdate=True, ls='-', color = 'tab:orange', label = 'Reviews') ax1.set_xlabel('date') ax1.set_ylabel('Nº reviews') ax1.set_title('Nº reviews and Covid19 cases') ax1.xaxis.set_tick_params(rotation = 90) ax1.set_ylim(ymin=0, ymax=1.5*np.nanmax(y)) ax2 = ax1.twinx() ax2.plot_date(cases_cat['fecha'], cases_cat['num_casos'], '-', xdate=True, linewidth=0.6, label='Covid' ) ax2.set_ylabel('nº of cases (uds)') ax1.legend() plt.tight_layout() ###Output _____no_output_____ ###Markdown Question 2: How did the price evolved for both visited and non-visited listings? ###Code visited_price = [] empty_price = [] for key in lsts: visited_price.append(lsts[key].loc[lsts[key]['number_of_reviews_l30d']>0,'price_person'].mean()) empty_price.append(lsts[key].loc[lsts[key]['number_of_reviews_l30d']==0,'price_person'].mean()) y = visited_price fig, ax1 = plt.subplots() ax1.plot_date(date_series-datetime.timedelta(15), visited_price, xdate=True, ls='-', color = 'tab:orange', label = 'visited') ax1.plot_date(date_series-datetime.timedelta(15), empty_price, xdate=True, ls='-', color = 'tab:green', label = 'empty') ax1.set_xlabel('date') ax1.set_ylabel('No. reviews') ax1.set_title('No. reviews and Covid19 cases') ax1.xaxis.set_tick_params(rotation = 90) ax1.set_ylim(ymin=0, ymax=1.3*np.nanmax(empty_price)) ax2 = ax1.twinx() ax2.plot_date(cases_cat['fecha'], cases_cat['num_casos'], '-', xdate=True, linewidth=0.6, label='Covid' ) ax2.set_ylabel('No. of cases') ax1.legend(loc=7) plt.tight_layout() ###Output _____no_output_____ ###Markdown Question 3: Which are the main differences between visited and non-visited listings? 
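One complementary, model-based view before comparing raw group means: the random forest fitted in the model-training section can rank which features best separate listings with and without recent reviews. This is a hedged sketch — it assumes the `forest` and `X_train` objects from the earlier cells are still in memory. ###Code
# Rank features by the fitted random forest's impurity-based importances.
importances = pd.Series(forest.feature_importances_, index=X_train.columns).sort_values(ascending=False)
print(importances.head(10))
###Output
_____no_output_____
###Markdown Now compare the two groups feature by feature: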
###Code df_comparison = df.groupby('had_reviews').mean() df_comparison.index.name = None df_comparison = df_comparison.T df_comparison['Relative difference'] = df_comparison.apply(lambda x: 200*(x[1]-x[0])/(x[0]+x[1]), axis = 1) features = ['price_person', 'capacity', 'number_of_reviews', 'reviews_per_month', 'host_listings_count', 'host_is_superhost','host_acceptance_rate', 'instant_bookable', 'host_identity_verified'] plot_df = df_comparison.loc[features] plot_df.sort_values(by='Relative difference', inplace=True) colors = plot_df['Relative difference'].apply(lambda x: 'powderblue' if x>=0 else 'salmon') plt.barh(plot_df.index, width=plot_df['Relative difference'], color=colors) plt.show() ###Output _____no_output_____ ###Markdown Question 4: Which neighbourhoods had more visitors? ###Code neig_reviews = {key: lsts[key].groupby(by='neighbourhood_group_cleansed')['number_of_reviews_l30d'].mean() for key in lsts} neig_reviews = pd.DataFrame(neig_reviews) fig, ax = plt.subplots() median = neig_reviews.mean(axis=1).median() colors = ['powderblue' if neig >= median else 'salmon' for neig in neig_reviews.mean(axis=1)] ax.bar(neig_reviews.index, neig_reviews.mean(axis = 1), color=colors) ax.xaxis.set_tick_params(rotation = -60) ###Output _____no_output_____
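###Markdown The bar chart above is easier to read when the neighbourhood groups are ordered — a small optional sketch (assumes `neig_reviews` from the previous cell): ###Code
# Sort neighbourhood groups by their average number of recent reviews before plotting.
avg_reviews = neig_reviews.mean(axis=1).sort_values()
fig, ax = plt.subplots()
ax.barh(avg_reviews.index, avg_reviews.values, color='powderblue')
ax.set_xlabel('Mean reviews in last 30 days')
###Output
_____no_output_____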
assignments/A0/A0_Q2.ipynb
###Markdown Q2This question will walk you through the process of how a coding assignment in JupyterHub will work. In the following parts, you'll edit the sections of code needed to pass the autograder.For each part below, you'll notice there's a cell with one line of Python, starting with the word `raise`. **Delete this line**, and in its place, put the code that answers the prompt.Below that cell, you'll find another cell with multiple lines of Python, starting with `try:`. **DO NOT EDIT THIS CELL.** This is the autograder. For future reference, JupyterHub knows what the autograder code is before you download the assignment, so even if you edit the autograder so your code passes its tests, the edits will be removed when you submit the assignment. So you can't cheat that way :)**HINT:** Every time you fill in code to answer a question, you can immediately test that code by selecting the cell and pressing the "Run Cell" button in the menu (looks like a Play icon). After you run the cell with your code, repeat this process with the autograder cell(s) below it. Provided there aren't any errors, your code worked correctly! If there are errors, go back and edit your code and repeat this process. Part ACreate a variable `x`, and set it equal to the value `3`. ###Code assert x == 3 ###Output _____no_output_____ ###Markdown Part BReset (or redefine) the variable `x` to have the value `3.14159`. ###Code assert x == 3.14159 ###Output _____no_output_____ ###Markdown Part CCreate a new variable `y`, and set it to the value `10`. ###Code assert y == 10 ###Output _____no_output_____ ###Markdown Part DCreate a new variable `z`, and set it to be the product of the variables `x` and `y`. ###Code assert z == 31.4159 ###Output _____no_output_____ ###Markdown Part ECreate a new variable `z_squared`, and set it to be the value of `z` raised the power 2. ###Code assert z_squared == 986.95877281 ###Output _____no_output_____ ###Markdown Question 2This question will walk you through the process of how a coding assignment in JupyterHub will work. In the following parts, you'll edit the sections of code needed to pass the autograder.For each part below, you'll notice there's a cell with one line of Python, starting with the word `raise`. **Delete this line**, and in its place, put the code that answers the prompt.Below that cell, you'll find another cell with multiple lines of Python, starting with `try:`. **DO NOT EDIT THIS CELL.** This is the autograder. For future reference, JupyterHub knows what the autograder code is before you download the assignment, so even if you edit the autograder so your code passes its tests, the edits will be removed when you submit the assignment. So you can't cheat that way :)**HINT:** Every time you fill in code to answer a question, you can immediately test that code by selecting the cell and pressing the "Run Cell" button in the menu (looks like a Play icon). After you run the cell with your code, repeat this process with the autograder cell(s) below it. Provided there aren't any errors, your code worked correctly! If there are errors, go back and edit your code and repeat this process. Part ACreate a variable `x`, and *assign* it to have the value `3`. ###Code assert x == 3 ###Output _____no_output_____ ###Markdown Part BReset (or *re-assign*) the variable `x` to have the value `3.14159`. ###Code assert x == 3.14159 ###Output _____no_output_____ ###Markdown Part CCreate a new variable `y`, and assign it to have the value `10`. 
###Code assert y == 10 ###Output _____no_output_____ ###Markdown Part DCreate a new variable `z`, and assign it to have the product of the variables `x` and `y`. ###Code assert z == 31.4159 ###Output _____no_output_____ ###Markdown Part ECreate a new variable `z_squared`, and assign it to have the value of `z` raised to the power 2. ###Code assert z_squared == 986.95877281 ###Output _____no_output_____
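###Markdown For reference, one set of assignments that would satisfy all of the autograder cells above — a sketch of what the omitted answer cells might contain: ###Code
x = 3                # Part A
x = 3.14159          # Part B: re-assign x
y = 10               # Part C
z = x * y            # Part D: product of x and y (31.4159)
z_squared = z ** 2   # Part E: z raised to the power 2 (986.95877281)
###Output
_____no_output_____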
mnist_encoder_kmeans.ipynb
###Markdown Use Encoding to Improve K-means on the MNIST Dataset ###Code n_rows = n_cols = 28 n_clusters = 10 ###Output _____no_output_____ ###Markdown Visualize Results & Calculate a Purity ScoreDisplay a sorted confusion matrix to identify if the correct number was clustered most of the timeThis will be used in the baseline and with encoding ###Code import seaborn as sn import numpy as np import matplotlib.pyplot as plt def print_results(c, y): y_train_to_clustered = np.dstack([y, c])[0] clustered_tallies = np.zeros((n_clusters, n_clusters), dtype=int) for i in range(0, len(y_train_to_clustered)): clustered_tallies[y_train_to_clustered[i][1]][y_train_to_clustered[i][0]] += 1 cluster_to_num_map = list(map(lambda x: np.argmax(x), clustered_tallies)) clustered_tallies = sorted(clustered_tallies, key=lambda e: np.argmax(e)) fig, ax = plt.subplots(1, figsize=(15,15)) p = sn.heatmap(clustered_tallies, annot=True, fmt="d", annot_kws={"size": 10}, cmap='coolwarm', ax=ax, square=True, yticklabels=cluster_to_num_map) plt.xlabel('Actual', fontsize=18) plt.ylabel('Cluster', fontsize=18) p.tick_params(length=0) p.xaxis.tick_top() p.xaxis.set_label_position('top') plt.title('Cluster match count for each number', fontsize= 30) # purity - sum of correct in each class divided by the total number of images purity_sums = np.zeros((10, 1)) for i in range(0, len(y_train_to_clustered[:])): if cluster_to_num_map[y_train_to_clustered[i][1]] == y[i]: purity_sums[cluster_to_num_map[y_train_to_clustered[i][0]]] += 1 print('Purity ', np.add.reduce(purity_sums)[0] / len(y)) ###Output _____no_output_____ ###Markdown Generate a K-Means Baseline ###Code from keras.datasets import mnist from sklearn.cluster import KMeans (x_org, y_train), (x_test_org, y_test) = mnist.load_data() x_train = x_org.reshape((x_org.shape[0], -1)) x_test = x_test_org.reshape(x_test_org.shape[0], n_rows, n_cols, 1) input_shape = (n_rows, n_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 k_base = KMeans(n_clusters=n_clusters) k_base.fit(x_train) base_clustered = k_base.predict(x_train) print_results(base_clustered, y_train) ###Output Purity 0.59085 ###Markdown Create Auto EncoderUse Keras to create an image autoencoder consisting of an encoder and decoderThe autoencoder takes an image, reduces it to n dimensions and recreates it ###Code import keras from keras import Model from keras.datasets import mnist from keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Reshape, BatchNormalization from keras import activations from keras.layers.advanced_activations import LeakyReLU import numpy as np n_dims = 14 # the more dimensions the more information is retained def get_encoder(x): x = Conv2D(16, (3, 3), activation=activations.relu, padding='same', name='conv2d16')(x) x = Flatten(name='flatten')(x) x = Dense(784, activation=activations.relu, name='dense1')(x) x = Dense(392, activation=activations.relu, name='dense2')(x) x = Dense(196, activation=activations.relu, name='dense3')(x) x = Dense(n_dims, activation=activations.relu, name='denseDim')(x) return x def get_decoder(x): x = Dense(196, activation=activations.relu)(x) x = Dense(392, activation=activations.relu)(x) x = Dense(784, activation='sigmoid')(x) x = Reshape((28,28,1))(x) return x x = Input(shape=(n_rows, n_cols, 1), name='input') encoder = get_encoder(x) decoder = get_decoder(encoder) autoencoder = Model(x, decoder) autoencoder.compile(loss=keras.losses.binary_crossentropy, 
optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) autoencoder.summary() (x_org, y_train), (x_test_org, y_test) = mnist.load_data() x_train = x_org.reshape(x_org.shape[0], n_rows, n_cols, 1) x_test = x_test_org.reshape(x_test_org.shape[0], n_rows, n_cols, 1) input_shape = (n_rows, n_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 # Train Model cbs = [keras.callbacks.EarlyStopping(patience=15, monitor='val_loss'), keras.callbacks.ModelCheckpoint(filepath='auto_encoder_weights.h5', save_best_only=True)] training_results = autoencoder.fit(x_train, x_train, batch_size=100, epochs=300, verbose=1, validation_data=(x_test, x_test), callbacks=cbs) print('complete') ###Output _____no_output_____ ###Markdown View Reconstructed Images to Confirm Auto Encoder is Working ###Code import matplotlib.pyplot as plt from matplotlib.pyplot import imshow autoencoder.load_weights('auto_encoder_weights.h5') n_images = 10 auto_encoded = autoencoder.predict(x_test[:n_images]) auto_encoded = (auto_encoded * 255).astype('int32') auto_encoded = auto_encoded.reshape(auto_encoded.shape[0], n_rows, n_cols) for i in range(0, n_images): fig=plt.figure(figsize=(10, 10)) fig.add_subplot(4, 4, 1) imshow(x_test_org[i]) fig.add_subplot(4, 4, 2) imshow(auto_encoded[i]) ###Output _____no_output_____ ###Markdown Create Encoder ###Code encoder = get_encoder(x) encoder = Model(x, encoder) encoder.compile(loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) # load the weights created by the auto encoder encoder.load_weights('auto_encoder_weights.h5', by_name=True) encoder.summary() encoded = encoder.predict(x_train) ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input (InputLayer) (None, 28, 28, 1) 0 _________________________________________________________________ conv2d16 (Conv2D) (None, 28, 28, 16) 160 _________________________________________________________________ flatten (Flatten) (None, 12544) 0 _________________________________________________________________ dense1 (Dense) (None, 784) 9835280 _________________________________________________________________ dense2 (Dense) (None, 392) 307720 _________________________________________________________________ dense3 (Dense) (None, 196) 77028 _________________________________________________________________ denseDim (Dense) (None, 14) 2758 ================================================================= Total params: 10,222,946 Trainable params: 10,222,946 Non-trainable params: 0 _________________________________________________________________ ###Markdown Plot Encoded Data
The chart will only fully reflect the data when `n_dims` = 3. To obtain accurate clustering, more than 3 dimensions of information is needed. ###Code import matplotlib.pyplot as plt plt.figure(figsize=(10,10)) x = encoded[:,0] y = encoded[:,1] z = encoded[:,2] ax = plt.axes(projection='3d') # limit the amount of plotted data to make the chart easier to view min = 32000 max = min+10000 ax.scatter(x[min:max], y[min:max], z[min:max], c=y_train[min:max], cmap='viridis') ###Output _____no_output_____ ###Markdown Train K-Means with Encoded Data ###Code from sklearn.cluster import KMeans k_encoded = KMeans(n_clusters=n_clusters) k_encoded.fit(encoded) centers = k_encoded.cluster_centers_ plt.figure(figsize=(10,10)) x = centers[:,0] y = centers[:,1] z = centers[:,2] ax = 
plt.axes(projection='3d') ax.scatter(x, y, z, c=y, cmap='viridis', linewidth=3); ###Output _____no_output_____ ###Markdown Find the Purity of K-Means Predictions
Use the distance of training data to centers to match target labels to clusters. ###Code encoded_clustered = k_encoded.predict(encoded) print_results(encoded_clustered, y_train) ###Output Purity 0.65535
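###Markdown For reference, the purity score reported above can also be computed directly from a confusion matrix. The following is a minimal sketch that is not part of the original notebook; it uses scikit-learn's `confusion_matrix` rather than the notebook's `print_results` helper, and the function name `purity_score` is hypothetical: ###Code from sklearn.metrics import confusion_matrix

def purity_score(y_true, y_cluster):
    # Rows are true labels, columns are cluster ids. For each cluster, take the
    # count of its most common true label, then divide by the total sample count.
    cm = confusion_matrix(y_true, y_cluster)
    return cm.max(axis=0).sum() / cm.sum()

# e.g. purity_score(y_train, encoded_clustered) should reproduce the purity value printed above ###Output _____no_output_____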
docs_src/data_block.ipynb
###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. 
This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. 
We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). 
###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. 
But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train and 20% validation, even while correctly seeding. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms. Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string (the separator is set by the `label_delim` argument of `label_from_df`) in the labels column. `from_df` and `from_csv` can be used in a more general way. In cases where you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. 
###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. 
###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. ###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). 
###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. 
###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. 
`tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. 
###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. 
If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply. Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is called implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default behaviour is simply to return that item. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, `get` is normally called implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 

###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column (default to the third column of the dataframe). The examples put in the validation set correspond to the indices with `True` value in that column. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure to pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:
```python
ll.train.to_csv('tmp.csv')
```
Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs the name of the folder a file Path object immediately belongs to, and then calls `label_from_func` with that lambda function as input. In practice, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency; for details, see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as shown in the example below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Behind the scenes, `label_from_func` applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList), puts all the function outputs into a list, and then passes that list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 

###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
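For instance, here is a minimal, hypothetical sketch of building such a label list directly from one-hot encoded targets; the class names and arrays below are made up purely for illustration and are not taken from any dataset used on this page — note that `classes` must then be given explicitly:

```python
from fastai.vision import *
import numpy as np

# hypothetical tags for three samples, already one-hot encoded over three classes
classes = ['cat', 'dog', 'outdoor']
encoded = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 1, 0]], dtype=np.float32)
labels = MultiCategoryList(encoded, classes=classes, one_hot=True)
```

If your labels are plain tag strings instead, labelling through the data block API (for example `label_from_df(label_delim=' ')` as in the planet example above) is the usual route.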
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
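As a quick, hedged illustration (simply reusing the MNIST_TINY layout from the first example on this page), the unlabelled `test` folder is typically added right after the labelling step:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
data = (ImageList.from_folder(path)
        .split_by_folder()         # train/valid folders
        .label_from_folder()       # labels come from the folder names
        .add_test_folder()         # adds path/'test', with empty labels
        .transform(tfms, size=64)
        .databunch())
```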
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). 
The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. 

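For instance, a minimal sketch of the most common choice, a random 80/20 split (reusing the MNIST_TINY images from the examples above), might look like this:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
sd = (ImageItemList.from_folder(path)
      .random_split_by_pct(0.2))   # put 20% of the items in the validation set
```

The result is an [`ItemLists`](/data_block.htmlItemLists) with a `train` and a `valid` [`ItemList`](/data_block.htmlItemList) that the labelling step below then acts on.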
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. 
If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
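As a quick orientation before the method listing, a [`LabelList`](/data_block.htmlLabelList) behaves like a dataset of `(x, y)` pairs. The short sketch below only reuses calls already shown in the MNIST example at the top of this page; the exact objects you get back will depend on your data.

```python
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
x, y = ll.train[0]            # one (Image, Category) pair from the training LabelList
len(ll.train), len(ll.valid)  # each split is its own LabelList
```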
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
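To make the `process_one`/`process` contract concrete, here is a minimal, illustrative sketch of a custom [`PreProcessor`](/data_block.htmlPreProcessor) that fills missing numeric values with a median computed once on the training set, in the spirit of the tabular example above. The class name `FillMedianProcessor` is made up for this sketch, and it assumes the dataset handed to `process` exposes its raw values through an `items` attribute, as [`ItemList`](/data_block.htmlItemList) does.

```python
import numpy as np
from fastai.data_block import PreProcessor

class FillMedianProcessor(PreProcessor):
    "Illustrative only: fill NaNs with a median computed on the first (training) set processed."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.median = None                      # inner state, computed once on the training set
    def process_one(self, item):
        item = np.asarray(item, dtype=np.float64).copy()
        item[np.isnan(item)] = self.median      # apply the stored state, unchanged, to any set
        return item
    def process(self, ds):
        if self.median is None:                 # first call (training set): compute the state
            self.median = np.nanmedian(np.concatenate([np.ravel(i) for i in ds.items]))
        ds.items = [self.process_one(i) for i in ds.items]
```

In real use you would attach such a processor through the `processor` argument (or the `_processor` class variable) of your custom [`ItemList`](/data_block.htmlItemList), as described in step 1.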
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). 
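As a small, indicative example (it only reuses the MNIST_TINY labelling call shown above), the `y` part of the resulting [`LabelList`](/data_block.htmlLabelList) is a [`CategoryList`](/data_block.htmlCategoryList) whose `classes` were inferred from the folder names:

```python
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train
ll.y          # the labels, stored as a CategoryList
ll.y.classes  # e.g. ['3', '7'], inferred from the subfolder names
```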
###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing lists of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` into a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. The values will have `log` applied to them if that flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single-label classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-label classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected even if they are available.
Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use the `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. 
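To situate these classes in the pipeline, the short snippet below (reusing only calls from the MNIST example above) shows what you are holding at each stage: an [`ItemLists`](/data_block.htmlItemLists) right after the split, and a [`LabelLists`](/data_block.htmlLabelLists) once the labelling is done.

```python
path = untar_data(URLs.MNIST_TINY)
il  = ImageItemList.from_folder(path)   # ItemList: all the inputs
ils = il.split_by_folder()              # ItemLists: .train and .valid are ItemLists
lls = ils.label_from_folder()           # LabelLists: .train and .valid are now LabelLists
data = lls.databunch()                  # final conversion to a DataBunch
```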
###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
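Before wiring this into the data block, it can help to check the image-to-mask filename convention by hand. A minimal sketch, assuming the `{stem}_P{suffix}` naming that the `get_y_fn` defined below relies on:

```
from fastai.vision import *

camvid = untar_data(URLs.CAMVID_TINY)
img = (camvid/'images').ls()[0]                      # pick one image file
mask = camvid/'labels'/f'{img.stem}_P{img.suffix}'   # the mask we expect it to pair with
img.name, mask.name, mask.exists()
```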
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? 
Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
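As a quick orientation, here is a minimal sketch (assuming the MNIST_SAMPLE data used elsewhere in these docs) of the two attributes every [`ItemList`](/data_block.htmlItemList) carries, `items` and `path`:

```
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
il = ImageList.from_folder(path)
il.path, len(il), il.items[:3]   # root folder, number of items, first few raw items
```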
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
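For instance, here is a minimal sketch of forcing regression targets when labelling from a dataframe; the csv name and its numeric columns are hypothetical and `path` is assumed to point at your image root, the only point being the `label_cls` argument:

```
# hypothetical 'coords.csv' with a filename column plus two numeric target columns
data = (ImageList.from_csv(path, 'coords.csv', folder='train', suffix='.jpg')
        .split_by_rand_pct()
        .label_from_df(cols=['x_coord', 'y_coord'], label_cls=FloatList)  # regression, not classification
        .databunch())
```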
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
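As an illustration (this class is not part of the library), writing a processor usually amounts to overriding `process_one`, and `process` only when you need a state computed on the training set; a minimal sketch:

```
from fastai.basics import *

class LowerCaseProcessor(PreProcessor):
    "Hypothetical processor that lower-cases every text item; it needs no training-set state."
    def process_one(self, item):
        return item.lower()
```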
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
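You rarely build one of these by hand: after the labelling step, the `train` and `valid` attributes of your label lists already are [`LabelList`](/data_block.htmlLabelList)s. A minimal sketch of inspecting one, assuming the MNIST_TINY data used above:

```
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()
ll.train.x, ll.train.y   # the inputs and the targets of the training LabelList
ll.train[0]              # indexing returns an (x, y) pair
```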
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its default is to return just that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any arguments from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s, which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and testing (optional) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should use `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically selects a label class according to the type of `labels`, where `labels` can be a `Collection`, a `pandas.core.frame.DataFrame` or a `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, it will output `MultiCategoryList`.
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
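A minimal sketch of building one directly from rows that are already one-hot encoded (the class names are made up for illustration):

```
from fastai.basics import *

# each row is a one-hot encoding over the (hypothetical) classes below
items = np.array([[1, 0, 1], [0, 1, 0]])
mcl = MultiCategoryList(items, classes=['cat', 'dog', 'person'], one_hot=True)
mcl[0]   # reconstructs the tags 'cat' and 'person' from the first row
```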
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
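For example, here is a minimal sketch (reusing the MNIST_TINY layout from the examples above) of attaching the unlabeled `test` folder after the labeling step:
###Code
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()
# add_test_folder grabs the items in path/'test'; add_test would take an ItemList or a collection instead
ll = ll.add_test_folder('test')
ll.test
###Output
_____no_output_____
###Markdown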
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
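As a quick illustration (reusing the MNIST_TINY `path` defined earlier on this page), the `train` and `valid` attributes of a labelled data object are [`LabelList`](/data_block.htmlLabelList)s pairing an input `x` with its labels `y`, and indexing one returns an `(x, y)` tuple with any transforms applied on the fly (to `y` too when `tfm_y=True`):

```python
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
ll.train.x, ll.train.y   # the paired input and label ItemLists
x,y = ll.train[0]        # an (Image, Category) tuple
```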
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
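For instance, here is a quick check you could run (a sketch, reusing `path_data` from above): since `split_by_rand_pct` shuffles with numpy's random generator, loading with `presort=True` under a fixed seed should give the same validation split every time, on any machine.

```python
np.random.seed(42)
sd_a = ItemList.from_folder(path_data/'test', presort=True).split_by_rand_pct(0.2)
np.random.seed(42)
sd_b = ItemList.from_folder(path_data/'test', presort=True).split_by_rand_pct(0.2)
# the two validation splits should contain exactly the same files
assert [str(o) for o in sd_a.valid.items] == [str(o) for o in sd_b.valid.items]
```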
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
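As a small sketch in the spirit of the `CategoryList` example above (an illustrative construction, not taken from the library docs): after processing, `items` hold lists of indices into `classes`, and each entry maps back to several tags.

```python
items = [[0, 1], [1], [0, 2]]    # each entry is a list of indices into classes
mcl = MultiCategoryList(items, classes=['cat', 'dog', 'horse'])
mcl[0]                           # should display the two tags 'cat' and 'dog'
```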
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
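For instance, here is a quick, illustrative sketch of the most common choices (it assumes `path` points at a dataset on disk as in the examples above; each line is an alternative, and the functions themselves are documented below):

```python
il = ImageItemList.from_folder(path)
split_folder = il.split_by_folder(train='train', valid='valid')  # use the 'train'/'valid' folders
split_random = il.random_split_by_pct(0.2)                       # hold out a random 20% of the items
split_fixed  = il.split_by_idx(range(800, 1000))                 # use fixed indices as the validation set
```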
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
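For example, here is an illustrative sketch (assuming `path` points at a folder of image files, as in the earlier examples) that keeps only the items whose filename ends in `.png`:

```python
il = (ImageItemList.from_folder(path)
      .filter_by_func(lambda fname: Path(fname).suffix == '.png'))
```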
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
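As a toy illustration of that "state computed on the training set" idea, here is a minimal sketch of a custom processor (it assumes each item is a plain string; the real text pipeline uses its own, much more complete processors). The vocabulary is built the first time `process` is called, which happens on the training set, and is then reused as-is for the validation (and test) set:

```python
class ToyVocabProcessor(PreProcessor):
    "Illustrative sketch: build a vocabulary on the training set, reuse it unchanged afterwards."
    def process_one(self, item):
        return [self.stoi.get(w, 0) for w in item.split()]
    def process(self, ds):
        if not hasattr(self, 'stoi'):    # first call = training set: compute the state
            vocab = sorted(set(w for o in ds.items for w in o.split()))
            self.stoi = {w:i for i,w in enumerate(vocab)}
        ds.items = [self.process_one(o) for o in ds.items]
```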
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
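For instance, here is an illustrative sketch of forcing regression labels (the dataframe `df` and its `name`/`age` columns are hypothetical and simply stand in for image filenames and a continuous target):

```python
data = (ImageList.from_df(df, path, cols='name')         # 'name' holds the image filenames (hypothetical)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='age', label_cls=FloatList)   # 'age' is a continuous target (hypothetical)
        .databunch())
```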
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
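To make the "state computed on the training set" point concrete, here is a small illustrative check (reusing the MNIST-style `path` from the labelling example above): the classes are determined while processing the training labels and then reused, unchanged, for the validation labels.

```python
src = ImageList.from_folder(path).split_by_folder().label_from_folder()
src.train.y.classes, src.valid.y.classes   # same classes: computed on train, reused on valid
```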
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
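For instance, in a camvid-style segmentation setup (a hedged sketch: it assumes `path_img` holds the images, `get_y_fn` maps an image filename to its mask and `codes` lists the class names), passing `tfm_y=True` to `transform` is what makes each mask receive the same flips, crops and resizing as its image:

```python
ll = (SegmentationItemList.from_folder(path_img)
      .split_by_rand_pct()
      .label_from_func(get_y_fn, classes=codes)
      .transform(get_transforms(), tfm_y=True, size=128))   # tfms applied to both x and y
```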
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
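For instance, a hypothetical image `images/0006R0_f02910.png` should be paired with the mask file `labels/0006R0_f02910_P.png`.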
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? 
-> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. 
We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). 
`create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. 
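To give a flavour of the first option above (replacing the `get` method), here is a minimal sketch; the class name and the `.npy` file format are purely illustrative and not part of the library:

```
class NpyList(ItemList):
    "Hypothetical ItemList whose items are paths to .npy files."
    def get(self, i):
        fn = super().get(i)   # the default get simply returns self.items[i] (here a filename)
        return np.load(fn)    # turn that filename into the object the model will consume
```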
###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
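The labelling step can also produce continuous targets. For instance, if the csv held a numeric column (say a hypothetical `score` column, which is not in the planet sample), you could make the regression intent explicit by forcing the label class:

```
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        .random_split_by_pct()
        .label_from_df(cols='score', label_cls=FloatList)  # float labels -> a regression-style FloatList
        .databunch())
```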
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize and then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`.
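To make this concrete, here is a minimal sketch of a custom processor; it is entirely hypothetical (it assumes the items are plain floats that may contain NaNs), but it shows the pattern: the state is computed the first time `process` runs, which is on the training set, and then reused as-is for the validation (and test) set.

```
class MedianFillProcessor(PreProcessor):
    "Hypothetical processor: replace NaN items by the median computed on the training set."
    def process(self, ds):
        if not hasattr(self, 'median'):   # state is computed only once, on the first (training) set processed
            self.median = float(np.nanmedian(np.array(ds.items, dtype=np.float64)))
        super().process(ds)               # then the default process applies process_one to every item
    def process_one(self, item):
        return self.median if np.isnan(item) else item
```

Such a processor could then be handed to your [`ItemList`](/data_block.htmlItemList) through its `processor` argument, or made the default with the `_processor` class variable.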
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
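In the example below those processors are `FillMissing` (fill missing values in the continuous columns, with statistics computed on the training set), `Categorify` (turn the categorical columns into integer codes) and `Normalize` (normalize the continuous columns with the training mean and standard deviation).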
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
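For instance, a purely illustrative filter that keeps only the png files could look like this (`path` is assumed to be an image folder, like the MNIST_TINY one used above):

```
il = ImageItemList.from_folder(path)
il = il.filter_by_func(lambda fn: fn.suffix == '.png')   # keep only the items for which the function returns True
```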
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward: you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
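As a quick illustration of the text example from the paragraph above (a sketch reusing the `data_lm` object built earlier on this page): the vocabulary is computed once, on the training texts, and the validation texts are numericalized with that very same vocabulary.

```
data_lm.train_ds.vocab.itos[:5], data_lm.valid_ds.vocab.itos[:5]   # same vocabulary on both sides
```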
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
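For instance, the `ll` training [`LabelList`](/data_block.htmlLabelList) built in step 3 above behaves like an `(x, y)` dataset: indexing it returns the processed input together with its label, and any transforms added with `transform` are applied on the fly (to `y` too when `tfm_y=True`). A minimal sketch:

```
x, y = ll[0]    # ll was created from MNIST_TINY in step 3; x is an Image, y a Category
len(ll), y
```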
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
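Concretely, a hypothetical image `images/0006R0_f02910.png` is paired with the mask `labels/0006R0_f02910_P.png`, which is the naming convention the labelling function defined below will rely on.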
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
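Once the load order is deterministic, seeding is enough to make random operations such as a split reproducible. Here is a minimal sketch (it assumes the same MNIST_TINY `path_data` as above; the 20% validation fraction is arbitrary):

```python
from fastai.vision import *

path_data = untar_data(URLs.MNIST_TINY)

np.random.seed(42)                                  # fix the RNG used by the split
il = ItemList.from_folder(path_data, presort=True)  # deterministic, sorted file order
sd = il.split_by_rand_pct(0.2)                      # the same 20% ends up in valid everywhere
sd
```

Because every machine now loads the items in the same sorted order, the seeded split selects the same files each time.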
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we get outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which contain your images. In the example below, the _train_ folder contains two folders/classes: _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the delimiter is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file in the above format. How to set `path`? `path` refers to your root data directory, so the paths in your csv file should be relative to `path` and not absolute paths. In the example below, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ 
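Since `from_df` accepts any dataframe in this two-column shape, you can also build one by hand when your labels come from somewhere else entirely. A minimal sketch (the file names and labels here are made up purely for illustration):

```python
from fastai.vision import *
import pandas as pd

path = untar_data(URLs.MNIST_SAMPLE)   # root data directory

# hand-made dataframe: paths relative to `path`, plus one label column
df_manual = pd.DataFrame({
    'name':  ['train/3/7463.png', 'train/7/9344.png'],   # hypothetical file names
    'label': ['3', '7'],
})
il = ImageList.from_df(df_manual, path)
il
```

For multi-label data the only change is that the label column holds several space-separated tags, which you later split with `.label_from_df(label_delim=' ')` as in the planet example above.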
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
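Here is a small sketch of that one-hot case (the vectors and class names are made up for illustration):

```python
from fastai.vision import *

# each item is already a one-hot (multi-hot) vector over the three classes
items = np.array([[1., 0., 1.],
                  [0., 1., 0.]])
ml = MultiCategoryList(items, classes=['cat', 'dog', 'horse'], one_hot=True)
ml
```

Since the encoding no longer carries the class names, passing `classes` explicitly is what lets fastai map each position back to a label.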
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. 
For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, here is what you should code:```class MyCustomItemList(): If you need custom arguments you will have to overwrite __init__ and new like this. def __init__(self, items:Iterator, my_args, **kwargs): super().__init__(items, **kwargs) store my args, initialize what is needed. def new(self, items:Iterator, **kwargs)->'NumericalizedTextList': Retrive your custom args stored and send them to new like this return super().new(items=items, my_args, **kwargs) This is how to get your data stored at index i def get(self, i): o = super().get(i) return what you need from o```You can add custom splitting or labelling methods if you need them. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). ###Code show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. 
`processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` into a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will apply `log` to the values if that flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward.
You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. 
Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? 
-> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
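For example, integer or string labels will produce a [`CategoryList`](/data_block.htmlCategoryList), float labels will produce a [`FloatList`](/data_block.htmlFloatList), and you can override that choice by passing `label_cls` yourself. Here is a minimal sketch of the latter; the dataframe and its 'age' column are hypothetical and only illustrate forcing a regression labelling:
```python
# Assuming df has a filename column and a numeric 'age' column (hypothetical).
# Passing label_cls=FloatList makes the labels regression targets instead of
# the CategoryList that int/str labels would otherwise produce.
data = (ImageList.from_df(df, path)
        .split_by_rand_pct()
        .label_from_df(cols='age', label_cls=FloatList)
        .databunch())
```
When no `label_cls` is passed, the label class is inferred from the type of the labels.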
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
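To make the processor pattern concrete, here is a small hypothetical sketch (not a class from the library; the name and the missing-value convention are made up for illustration) that fills missing numeric values with the median computed on the training set, mirroring the tabular example above:
```python
import numpy as np

class FillMedianProcessor(PreProcessor):
    "Hypothetical processor: compute the median on the training set, reuse it everywhere."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.median = None                    # state, not computed yet
    def process_one(self, item):
        # Missing values are assumed to be encoded as np.nan.
        return self.median if np.isnan(item) else item
    def process(self, ds):
        # The first dataset processed is the training set: compute the state there,
        # then the same median is reused on the validation (and test) set.
        if self.median is None:
            self.median = float(np.nanmedian(np.array(ds.items, dtype=np.float64)))
        super().process(ds)                   # applies process_one to every item
```
Such a processor would be attached either by passing `processor=FillMedianProcessor()` when creating the [`ItemList`](/data_block.htmlItemList) in step 1, or through the `_processor` class variable of a custom [`ItemList`](/data_block.htmlItemList) subclass.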
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
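To get a feel for what a [`LabelList`](/data_block.htmlLabelList) holds in practice, here is a small illustrative snippet reusing the MNIST_TINY data from the examples above (nothing beyond the `x`/`y` attributes described earlier is assumed):
```python
path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()
# ll.train and ll.valid are LabelLists: inputs in x, targets in y
xs, ys = ll.train.x, ll.train.y
x0, y0 = ll.train[0]   # indexing a LabelList returns an (input, target) pair
```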
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its default is to return just that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to open the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column (default to the third column of the dataframe). The examples put in the validation set correspond to the indices with `True` value in that column. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
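In practice you rarely build a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) by hand: it is what the labeling step produces when each observation carries several tags, as in the planet example earlier in these docs. A minimal sketch, assuming the space-delimited tags of `URLs.PLANET_TINY`: ###Code
from fastai.vision import *

planet = untar_data(URLs.PLANET_TINY)
# each row of labels.csv holds several tags separated by spaces, so
# label_from_df(label_delim=' ') creates a MultiCategoryList behind the scenes
ll = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
      .split_by_rand_pct()
      .label_from_df(label_delim=' '))
type(ll.train.y), ll.train.y.classes[:5]
###Output
_____no_output_____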
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
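For example, here is a minimal sketch of attaching an unlabeled test set right after the labeling step (it assumes the MNIST_TINY layout used throughout these docs; 'test' is also the default folder name): ###Code
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = (ImageList.from_folder(path)
      .split_by_folder()
      .label_from_folder()
      .add_test_folder('test'))   # or: ll.add_test(ImageList.from_folder(path/'test'))
ll.test
###Output
_____no_output_____
###Markdown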
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages in situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data_block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must first be split into an [`ItemLists`](/data_block.htmlItemLists); labeling then turns it into a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown And `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which contain your images. For the example below, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In cases where you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory, so the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with the suffix ".png". Well, this method will do the magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
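When the labels are already one-hot encoded, `classes` must be passed explicitly, as noted above. A minimal sketch of constructing such a list directly (the tiny array and tag names are purely illustrative): ###Code
import numpy as np
from fastai.vision import *  # MultiCategoryList is defined in fastai.data_block and re-exported here

# two samples, three possible tags, already one-hot encoded
items = np.array([[1., 0., 1.], [0., 1., 0.]])
classes = ['agriculture', 'clear', 'primary']  # illustrative tag names
mcl = MultiCategoryList(items, classes=classes, one_hot=True)
mcl.c, mcl.classes
###Output
_____no_output_____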
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
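As a complement to the warning above, here is a minimal sketch of how an attached test set is typically consumed: train as usual, then ask the `Learner` for predictions on the test items (the small model and single epoch are purely illustrative): ###Code
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder()                      # attach the unlabeled 'test' folder
        .transform(get_transforms(do_flip=False), size=64)
        .databunch(bs=16)
        .normalize())
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
# predictions are returned in the same order as data.test_ds.items
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
###Output
_____no_output_____
###Markdown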
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages in situations where the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before labeling can turn it into a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we call `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
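As a hedged illustration of why this matters (the 20% split below is arbitrary, and `path_data` comes from the cell above): with a presorted list and a fixed seed, a random split selects the same validation items no matter which machine runs the code.

```python
np.random.seed(42)
il = ImageList.from_folder(path_data, presort=True)  # deterministic starting order
sd = il.split_by_rand_pct(0.2)
sd.valid.items[:3]  # the same files every time, on every machine
```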
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
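For instance, here is a small sketch with arbitrarily chosen indices, reusing `df` and `path` from the cell above:

```python
data = (ImageList.from_df(df, path)
        .split_by_idx([1, 3, 10]))  # rows 1, 3 and 10 of df go to the validation set
data
```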
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages in situations where the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before labeling can turn it into a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we call `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
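Once the load order is deterministic, any randomness layered on top of it can be made repeatable by seeding it. Here is a minimal sketch (assuming the same MNIST_TINY data used above and the `seed` argument of `split_by_rand_pct`) that checks a seeded random split always selects the same validation files:

```python
from fastai.vision import *

path_data = untar_data(URLs.MNIST_TINY)

def valid_fnames(seed):
    # presort=True fixes the load order; the seed then fixes the split built on top of it
    il = ImageList.from_folder(path_data/'train', presort=True)
    return [f.name for f in il.split_by_rand_pct(0.2, seed=seed).valid.items]

# within one session this is trivially true; with presort=True it also holds across
# machines and platforms, because the starting order no longer depends on the filesystem
assert valid_fnames(42) == valid_fnames(42)
```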
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown And `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we get outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders, which contain your images. In the example below, the _train_ folder contains two folders/classes: _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has two columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file in the above format. How to set `path`? `path` refers to your root data directory, so the paths in your csv file should be relative to `path` and not absolute paths. In the example below, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with the suffix ".png". Well, this method will do the magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check whether you can easily customize one of the existing subclasses by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its default is to return just that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do something with your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
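Here is a minimal sketch of the one-hot case with hypothetical labels, constructing the list directly rather than through the usual labelling step, just to show that `classes` must be supplied explicitly:

```python
import numpy as np
from fastai.vision import *

classes = ['cat', 'dog', 'person']
items = np.array([[1., 0., 1.],    # cat + person
                  [0., 1., 0.],    # dog
                  [1., 1., 0.]])   # cat + dog
mcl = MultiCategoryList(items, classes=classes, one_hot=True)
mcl.classes, mcl.one_hot
```

In normal use these one-hot rows would come from your dataframe and be passed along during the labelling step rather than built by hand.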
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
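As a minimal sketch (mirroring the MNIST_TINY example at the top of this page, transforms omitted for brevity), adding a test folder attaches the items but never their labels:

```python
from fastai.vision import *

path_data = untar_data(URLs.MNIST_TINY)
data = (ImageList.from_folder(path_data)
        .split_by_folder('train', 'valid')
        .label_from_folder()
        .add_test_folder('test')   # every test item gets an empty (or the constant `label`) label
        .databunch(bs=16))
len(data.test_ds)                  # the test items are there, just without usable labels
```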
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in fodlers following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? 
-> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .add_test_folder() #Optionally add a test set .datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset .transform(tfms, size=224) #Data augmetnation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.test_ds[0] data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the defulat 20% in valid .datasets(ImageMultiDataset) #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmetnation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,8), is_train=False) ###Output _____no_output_____ ###Markdown This new API also allows to use datasets type for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? 
-> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, is_train=False, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. 
###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(SplitData.datasets) show_doc(SplitData.add_test) show_doc(SplitData.add_test_folder) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API leps you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. 
As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and a valid directory, each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ImageMultiDataset) #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,10), is_train=False) ###Output _____no_output_____ ###Markdown This new API also allows us to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, the road...)
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. The new thing is that we use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as the ones applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown Then it's very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, is_train=False, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Give the inputs The inputs we want to feed our model are regrouped in the following class. It contains methods to then attribute labels to them. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). 
Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. In future development, it will contain factory methods to directly create a [`LabelList`](/data_block.htmlLabelList) from a source of labelled data (a csv file or a dataframe with inputs and labels) for instance. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. 
You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
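For instance, here is a minimal sketch of the two most common choices on the MNIST data used above: splitting according to the train/valid folders on disk, or holding out a random 20% of the items. The individual methods are documented below.

###Code
path = untar_data(URLs.MNIST_TINY)
#Split according to the 'train' and 'valid' folders on disk...
split_by_dir = ImageItemList.from_folder(path).split_by_folder()
#...or hold out a random 20% of the items for the validation set.
split_random = ImageItemList.from_folder(path).random_split_by_pct(0.2)
###Output _____no_output_____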
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
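For example, here is a minimal sketch (on the same MNIST_TINY data) of the kind of reproducible split this enables; the seed value is arbitrary and just needs to be set before the random split.

###Code
np.random.seed(42)
path_data = untar_data(URLs.MNIST_TINY)
#With presort=True the items load in a deterministic order, so the seeded
#random 20% split below picks the same files on every machine.
sd = ItemList.from_folder(path_data, presort=True).split_by_rand_pct(0.2)
###Output _____no_output_____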
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes: _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multi-labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the delimiter is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name)
! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its default is to return just that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
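To see how a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) typically comes to life, here is a minimal sketch reusing the planet sample from the examples above (nothing new, just the labelling step isolated; splitting each label string on a space via `label_delim` is what triggers the multi-category behaviour):
```
from fastai.vision import *
planet = untar_data(URLs.PLANET_TINY)
ll = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        .split_by_rand_pct()
        .label_from_df(label_delim=' '))  # space-separated tags -> MultiCategoryList
type(ll.train.y), ll.train.y[0]           # a MultiCategoryList, and one item carrying several tags
```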
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
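For instance, a minimal sketch (reusing MNIST_TINY from earlier in these docs) that attaches the images of the `test` folder as such an unlabeled test set could look like this:
```
from fastai.vision import *
path = untar_data(URLs.MNIST_TINY)
ll = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder('test'))  # every test item receives the same empty/constant label
ll.test
```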
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
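To make the idea of a stateful processor concrete, here is a minimal, hypothetical sketch; the class name and the median-filling behaviour are invented purely for illustration (real examples in the library are [`CategoryProcessor`](/data_block.htmlCategoryProcessor) below, or the tabular `FillMissing`):
```
from fastai.basics import *

class FillMedianProcessor(PreProcessor):
    "Fill missing (NaN) items with the median computed on the first dataset processed (the training set)."
    def process(self, ds):
        items = np.array(ds.items, dtype=np.float64)
        # the state (the median) is computed once, on the training set, then reused as-is
        if not hasattr(self, 'median'): self.median = float(np.nanmedian(items))
        ds.items = np.where(np.isnan(items), self.median, items)
```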
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
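As a quick illustration (reusing MNIST_TINY), the training set produced by the labelling step is such a [`LabelList`](/data_block.htmlLabelList), pairing an input [`ItemList`](/data_block.htmlItemList) `x` with a label [`ItemList`](/data_block.htmlItemList) `y`:
```
from fastai.vision import *
path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()
ll.train.x, ll.train.y  # the inputs and the labels wrapped together in a LabelList
```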
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
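For instance, here is a rough sketch that uses one of them on the MNIST data downloaded above (the folder names are just those of this particular dataset, and the choice of filter is arbitrary): ###Code
path = untar_data(URLs.MNIST_TINY)
# keep only the items sitting in the 'train' and 'valid' folders, dropping everything else
il = ImageItemList.from_folder(path).filter_by_folder(include=['train', 'valid'])
###Output _____no_output_____ ###Markdown Each of these filtering methods is documented below.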
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).
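For example, integer labels would normally be inferred as categories; if they really are regression targets you can force the choice by passing `label_cls` explicitly. Here is a minimal sketch (the dataframe `df`, its `price` column and the image folder `path` are hypothetical, and you would chain `.transform(...)` and `.databunch()` afterwards as in the earlier examples): ###Code
ll = (ImageItemList.from_df(df, path)
      .random_split_by_pct()
      # force regression labels even though the column holds integers
      .label_from_df(cols='price', label_cls=FloatList))
###Output _____no_output_____ ###Markdown When you don't pass anything, the default choice described above is applied.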
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
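As an illustration, here is a minimal sketch of a custom processor (the class name and the statistic are made up for this example): it rescales numeric labels by a mean that is computed the first time `process` runs, which is on the training set, so the same statistic is then reused for the validation (and test) set. ###Code
class MeanScaleProcessor(PreProcessor):
    "Made-up example: divide numeric labels by a mean computed on the training set."
    def __init__(self, ds:ItemList=None):
        super().__init__(ds)
        self.mean = None
    def process_one(self, item):
        return item / self.mean
    def process(self, ds):
        # the state is computed once, on the training set, then reused as-is afterwards
        if self.mean is None: self.mean = float(np.mean(ds.items))
        super().process(ds)
###Output _____no_output_____ ###Markdown Such a processor can be attached by passing it in the `processor` argument of your [`ItemList`](/data_block.htmlItemList), or through the `_processor` class variable of a custom subclass.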
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
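Concretely, after the labelling step the `train` and `valid` attributes of a [`LabelLists`](/data_block.htmlLabelLists) (and the `train_ds`/`valid_ds` of the final [`DataBunch`](/basic_data.htmlDataBunch)) are [`LabelList`](/data_block.htmlLabelList) objects, and indexing one returns an (input, target) pair. A quick sketch with the MNIST data used earlier: ###Code
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
ll.train[0]  # an (Image, Category) pair
###Output _____no_output_____ ###Markdown The individual methods of [`LabelList`](/data_block.htmlLabelList) are documented below.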
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". 
Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. 
###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. ###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). 
###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. 
###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. 
`tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. 
###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. 
If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. 
You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list:
- [`CategoryList`](/data_block.htmlCategoryList) for labels in classification
- [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem
- [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem
- [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images
- [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList)
- [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks
- [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList`
- `ObjectLabelList` for object detection
- [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints))
- [`TextList`](/text.data.htmlTextList) for text data
- [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files
- [`TabularList`](/tabular.data.htmlTabularList) for tabular data
- `CollabList` for collaborative filtering

Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default `PreProcessor` with the `_processor` class variable

If this isn't the case and you really need to write your own class, here is what you should code:
```python
class MyCustomItemList(ItemList):
    #If you need custom arguments you will have to overwrite __init__ and new like this.
    def __init__(self, items:Iterator, my_args, **kwargs):
        super().__init__(items, **kwargs)
        #store my args, initialize what is needed.
        self.my_args = my_args

    def new(self, items:Iterator, **kwargs)->'MyCustomItemList':
        #Retrieve your custom args stored and send them to new like this
        return super().new(items, my_args=self.my_args, **kwargs)

    #This is how to get your data stored at index i
    def get(self, i):
        o = super().get(i)
        return o #return what you need from o
```
You can add custom splitting or labelling methods if you need them. ###Code show_doc(ItemList.predict) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need.
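Before the reference entries, here is a minimal, hedged sketch of the two most common splitting choices, reusing the MNIST_TINY data from the examples above (this is an illustration of the step, not part of the reference):
```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
il = ImageItemList.from_folder(path)

split_folders = il.split_by_folder()         # uses the 'train' and 'valid' folders
split_random  = il.random_split_by_pct(0.2)  # puts a random 20% of the items in valid
```
Each call returns an [`ItemLists`](/data_block.htmlItemLists) with a `train` and a `valid` [`ItemList`](/data_block.htmlItemList), on which the labelling step can then be chained.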
###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). ###Code show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of `PreProcessor` classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. 
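Before the class reference, here is a small illustrative sketch of that fit-on-train / reuse-on-valid pattern. The `MedianFillProcessor` name and its internals are made up for this example and are not part of the library:
```python
import numpy as np
from fastai.data_block import PreProcessor

class MedianFillProcessor(PreProcessor):
    "Fill missing (None) items with a median computed on the first dataset processed."
    def process(self, ds):
        # the state is computed on the first dataset seen (the training set)...
        if not hasattr(self, 'median'):
            self.median = np.median([o for o in ds.items if o is not None])
        super().process(ds)  # the default applies process_one to every item of ds

    def process_one(self, item):
        # ...and reused unchanged on the validation (and maybe test) items
        return self.median if item is None else item
```
Passed as `processor` to an [`ItemList`](/data_block.htmlItemList) (or set via the `_processor` class variable), such an object would be applied once after the splitting and labelling steps, exactly as described above.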
###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`.
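As a quick, hedged illustration of that container (MNIST_TINY assumed, as in the examples above), the splitting step simply produces one stream per subset:
```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ils = ImageItemList.from_folder(path).split_by_folder()

ils.train  # ItemList with the items found under 'train'
ils.valid  # ItemList with the items found under 'valid'
```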
###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(CategoryList.predict) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
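For instance, here is a hedged sketch of keeping only the '.png' files that `from_folder` grabbed (MNIST_TINY assumed; the lambda mirrors the filtering idiom used later in these docs):
```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
# a bare ItemList grabs every file it finds, e.g. including a labels.csv
il = (ItemList.from_folder(path)
              .filter_by_func(lambda o: Path(o).suffix == '.png'))
```
The reference entries for the filtering methods follow.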
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). 
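Continuing the MNIST example above, here is a hedged peek at what this looks like in practice (attribute names as described in these docs; the exact output may vary):
```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()

ll.train.y          # a CategoryList holding one label per training image
ll.train.y.classes  # the unique labels that were found, e.g. ['3', '7']
```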
###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single classificatio problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single multi-classificatio problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. 
Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to use the `test` dataset with labels, you probably need to use it as a validation set, as in:
```python
data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        ...)
```
Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:
```python
tfms = []
path = Path('data').resolve()
data = (ImageItemList.from_folder(path)
        .split_by_pct()
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize() )
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)

# now replace the validation dataset entry with the test dataset as a new validation dataset:
# everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder`
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize() )
learn.data = data_test
learn.validate()
```
Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`.
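To tie these inner classes together, here is a small, hedged sketch (MNIST_TINY assumed): splitting yields an `ItemLists` with one stream per subset, and labelling turns it into `LabelLists` whose `train` and `valid` are [`LabelList`](/data_block.htmlLabelList) datasets holding `x` and `y`:
```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ils = ImageItemList.from_folder(path).split_by_folder()  # ItemLists: .train and .valid are ItemList objects
lls = ils.label_from_folder()                            # LabelLists: .train and .valid are LabelList datasets

x, y = lls.train[0]       # a LabelList indexes into (input, target) pairs
lls.train.x, lls.train.y  # the underlying inputs (images) and targets (labels)
```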
###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? 
Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? 
-> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? 
-> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. 
###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can concatenate another [`ItemList`](/data_block.htmlItemList) object in place. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necessarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ... This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train and 20% validation, even while correctly seeding. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms. Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we get the outputs above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multi-labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths.
In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. 
###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. ###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. 
###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. 
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. 
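For intuition, that generated function is essentially the same kind of parent-folder lambda shown later in the `label_from_func` example. A minimal sketch (not the exact library source; the path below is purely illustrative) would be:

```python
import os
from pathlib import Path

# Roughly what label_from_folder builds internally: map a file path to the name of
# the folder that immediately contains it, then hand that function to label_from_func.
get_folder_label = lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2]

get_folder_label(Path('mnist_tiny/train/3/9932.png'))  # -> '3'
```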
On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. 
When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. 
They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. 
###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all. Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which the training, validation and (if present) test [`ItemList`](/data_block.htmlItemList)s each call `label_from_folder`; it then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and finally calls [`LabelLists.process`](/data_block.htmlLabelLists.process). You can directly use `LabelLists.__getattr__` to do the labelling, as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object works exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and it does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts the `train.x._processor` classes and the `train.y._processor` classes into separate lists, then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more [`PreProcessor`](/data_block.htmlPreProcessor) objects. Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects for files in `path` that have a suffix in `extensions`; hidden folders and files are ignored.
If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. 
You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. 
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. 
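To make this concrete, here is a minimal, hypothetical processor written against the `PreProcessor` interface described above (a sketch only, not a class from the library; it assumes missing values are represented as `None` and that the items are numeric). It computes its state (a median) the first time `process` is called, which in the standard pipeline is on the training set, and then reuses that stored state unchanged on the validation set:

```python
import numpy as np
from fastai.tabular import *  # brings PreProcessor into the namespace

class FillMedianProcessor(PreProcessor):
    "Sketch: fill missing items with the median computed on the training set."
    def __init__(self, ds=None):
        self.median = None                       # state shared across datasets
    def process_one(self, item):
        return self.median if item is None else item
    def process(self, ds):
        if self.median is None:                  # first call -> training set: compute the state
            self.median = float(np.median([o for o in ds.items if o is not None]))
        super().process(ds)                      # default behaviour: apply process_one to every item
```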
###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.from_lists) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`.
###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output You can deactivate this warning by passing `no_check=True`. ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. 
If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
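In practice a [`LabelList`](/data_block.htmlLabelList) behaves like a dataset of `(x, y)` pairs, with the transforms applied on the fly when an item is grabbed. A quick sketch (again assuming the MNIST_TINY data used earlier on this page):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
lls = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
x, y = lls.train[0]           # indexing returns an (input, target) pair
len(lls.train), x.shape, y    # number of training items, image size, its Category
```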
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
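For instance, a minimal sketch of a regression labelling might look like the following (here `df` is a hypothetical dataframe with a filename column `name` and a numeric `age` column, and `path` its hypothetical root folder; the names are purely illustrative):

```python
from fastai.vision import *

# Hypothetical setup: df holds image filenames in column 'name' and a numeric target in column 'age'.
# Without label_cls=FloatList, float targets could be treated as one-hot encoded classification labels.
data = (ImageList.from_df(df, path, cols='name')
        .split_by_rand_pct(0.2)
        .label_from_df(cols='age', label_cls=FloatList)  # force regression labels (FloatList)
        .transform(get_transforms(), size=64)
        .databunch())
```

The resulting databunch then yields [`FloatList`](/data_block.htmlFloatList) targets, so a learner built on top of it initializes for regression.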
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
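To make the idea concrete, here is a minimal, hypothetical sketch of a custom [`PreProcessor`](/data_block.htmlPreProcessor) (the class name and the median-filling behaviour are made up for illustration, not part of the library): the statistic is computed the first time `process` runs, i.e. on the training set, and the very same value is then reused for the validation (and test) set.

```python
from fastai.basics import *

class MedianFillProcessor(PreProcessor):
    "Toy processor (hypothetical): fill missing numeric items with the median of the training set."
    def __init__(self, ds:ItemList=None):
        self.median = None                      # inner state, set once on the training set
    def process_one(self, item):
        # applied to a single item, using the stored state
        return self.median if np.isnan(item) else item
    def process(self, ds:ItemList):
        # the first dataset processed is the training set: compute the statistic there,
        # then reuse exactly the same value for every later dataset
        if self.median is None:
            self.median = float(np.nanmedian(np.array(ds.items, dtype=np.float64)))
        ds.items = np.array([self.process_one(o) for o in ds.items])
```

Such a processor would then be passed with the `processor=` argument when creating the [`ItemList`](/data_block.htmlItemList) in step 1 (or set as the `_processor` class variable of a custom subclass), as described above.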
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
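As a small, purely illustrative sketch (the tag names and encodings below are made up), here is what constructing a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) looks like both from raw tag lists and from targets that are already one-hot encoded; note that in the one-hot case passing `classes` is mandatory:

```python
from fastai.basics import *

# Hypothetical multi-label targets: each item carries a list of tags.
tags = [['agriculture', 'clear'], ['clear', 'water'], ['haze']]
mcl = MultiCategoryList(tags)   # classes=None -> the unique tags are collected when the list is processed

# With targets that are already one-hot encoded, the class names cannot be read off the labels,
# so they must be supplied explicitly.
encoded = [[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
mcl_oh = MultiCategoryList(encoded, classes=['agriculture', 'clear', 'haze', 'water'], one_hot=True)
```

In practice you rarely build this list by hand: labelling with `label_from_df(label_delim=' ')`, as in the planet example earlier, produces it for you.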
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
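As a preview of how those questions map onto actual code, here is a generic outline of such a chain. The specific classes and methods below (an image list read from folders, with placeholder `path` and `tfms`) are just one possible combination; the examples that follow walk through real ones in detail. Note that the test-set step can come before the transforms, as in the MNIST example below:

```python
data = (ImageList.from_folder(path)   # 1. where are the inputs and how to create them?
        .split_by_folder()            # 2. how to split the data into train/valid?
        .label_from_folder()          # 3. how to label the inputs?
        .add_test_folder()            # 5. optionally add a test set
        .transform(tfms, size=64)     # 4. which transforms to apply?
        .databunch(bs=64))            # 6. wrap in dataloaders and create the DataBunch
```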
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data, whether in memory or stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns: the first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (with the separator determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you.
###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally; just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward: you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) ###Output _____no_output_____ ###Markdown Let's try this function for CIFAR-10. In every folder, we have our image files named 0001.png, 0002.png, etc. So we have 10 images named 0001.png (one for each class) in the train folder. `valid_names` can be a list containing the names of the images that you want to place in your validation set. __Note__: here in `valid_names` you need to specify the image name and not the image path. So for `/path/to/image.png`, we only need to add `image.png` to our valid_names. ###Code path = untar_data(URLs.CIFAR) path.ls() data = (ImageList.from_folder(path) .split_by_files(valid_names=['0001.png', '0002.png'])) data ###Output _____no_output_____ ###Markdown In the valid set we can see 40 images (20 images of 0001.png and 20 images of 0002.png, taken from both the train and valid folders). ###Code show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`.
`fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
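For instance, here is a hedged sketch of forcing regression labels from a dataframe; `df_reg`, `'target1'` and `'target2'` are hypothetical names, not part of the datasets used on this page:

```python
# df_reg is assumed to have a filename column followed by two float target columns
data = (ImageList.from_df(df_reg, path)
        .split_by_rand_pct()
        .label_from_df(cols=['target1', 'target2'], label_cls=FloatList)
        .databunch())
```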
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
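To make this concrete, here is a minimal, hypothetical [`PreProcessor`](/data_block.htmlPreProcessor) sketch (not one that ships with fastai): it lower-cases string labels and relies on the default `process` to apply `process_one` to every item of the dataset:

```python
class LowerCaseProcessor(PreProcessor):
    # This toy processor keeps no state; a real one would typically compute its
    # state (vocab, classes, medians...) in process() on the training set and
    # reuse it unchanged on the validation (and test) set.
    def process_one(self, item):
        return str(item).lower()
```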
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
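As a quick illustration (reusing the MNIST_TINY data from earlier on this page), a [`LabelList`](/data_block.htmlLabelList) behaves like a dataset of `(x, y)` pairs, and `tfm_y` controls whether the transforms are applied to `y` as well as `x`:

```python
path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()
x, y = ll.train[0]               # an (Image, Category) pair
len(ll.train), len(ll.valid)     # sizes of the training and validation LabelLists
```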
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside `get_files`, there is `_get_files` which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown Without `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. 
###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). 
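As a quick, hedged illustration (reusing `ll`, the labelled MNIST_TINY training set built just above), you can inspect the [`CategoryList`](/data_block.htmlCategoryList) that labelling produced and the classes it inferred:

```python
ll.y            # the CategoryList holding the labels
ll.y.classes    # the unique labels inferred from the folder names, e.g. ['3', '7']
```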
###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` into a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will take the `log` of the values if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods.
###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in fodlers following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .add_test_folder() #Optionally add a test set .datasets() #How to convert to datasets? .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.train_ds[0] data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets() #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,8)) ###Output _____no_output_____ ###Markdown The data block API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, ds_type=DatasetType.Valid, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). 
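For example, here is a small sketch of the difference between the two classes. The `extensions` argument name comes from the `from_folder` signature documented above; the `'.csv'` filter and the variable names are just for illustration in this (older) version of the API.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)

files  = InputList.from_folder(path, extensions=['.csv'])  # generic list: you choose which extensions to grab
images = ImageFileList.from_folder(path)                   # vision subclass: image extensions by default

images.items[:3], images.path   # the raw inputs and the root path the list will search
```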
Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_const) show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. ###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. ###Code show_doc(LabelList.from_csv) show_doc(LabelList.from_csvs) show_doc(LabelList.from_df) ###Output _____no_output_____ ###Markdown Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(LabelList.split_by_valid_func) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(LabelList.from_csv) show_doc(SplitData.add_test) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). 
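Putting the labelling, splitting and dataset-creation methods above together, here is the earlier MNIST chain again, written step by step so you can see roughly which class each stage produces. The intermediate variable names are ours and this is a sketch, not additional library documentation.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)

inputs   = ImageFileList.from_folder(path)   # an InputList of image filenames
labelled = inputs.label_from_folder()        # a LabelList of (input, label) pairs
split    = labelled.split_by_folder()        # a SplitData with train/valid parts
dsets    = split.datasets()                  # the train/valid (and optionally test) datasets
data     = dsets.transform(tfms, size=224).databunch()  # and finally a DataBunch
```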
Utility classes and functions ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(LabelList.split_by_list) show_doc(SplitData.dataset_cls) show_doc(InputList.create_label_list) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? 
-> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. 
###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. 
`label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
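Before the full list, here is a minimal sketch of a few of the choices, reusing `path` from the MNIST example above. It is purely illustrative: pick whichever call matches how your data is organized.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
il = ImageItemList.from_folder(path)

# pick one of the following (same items, split differently):
sd = il.split_by_folder()                           # use the 'train' and 'valid' folders
# sd = il.random_split_by_pct(0.2)                  # or hold out a random 20% for validation
# sd = il.split_by_idx(valid_idx=range(800,1000))   # or give explicit validation indices
# sd = il.no_split()                                # or don't split at all (no separate validation set)
```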
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. 
If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
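Concretely, a `LabelList` behaves like a dataset of `(x, y)` pairs, so indexing it gives back one input and its target. A quick sketch, rebuilding the training split from the labelling step above (the variable name is ours):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
train_ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train

x,y = train_ll[0]   # one (input, target) pair, with its transforms (if any) applied on the fly
x,y
```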
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output You can deactivate this warning by passing `no_check=True`. ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
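For instance, here is a quick sketch of two of them (documented just below). The lambda and the `0.1` are arbitrary, and we are assuming, as the names suggest, that `filter_by_func` keeps the items for which the function returns `True` and `filter_by_rand` keeps a random fraction of them.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
il = ImageItemList.from_folder(path)

il = il.filter_by_func(lambda fname: not fname.name.startswith('.'))  # drop hidden files
il = il.filter_by_rand(0.1)   # keep ~10% of the items, e.g. to prototype quickly
```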
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. 
If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? 
Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
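For instance, here is a minimal sketch of an image regression setup (the `labels.csv` file and its numeric `age` column are purely hypothetical, and the usual `from fastai.vision import *` context is assumed):

```python
data = (ImageList.from_csv(path, 'labels.csv', folder='train', suffix='.jpg')
        .split_by_rand_pct()
        .label_from_df(cols='age', label_cls=FloatList)  # force regression labels
        .transform(get_transforms(), size=128)
        .databunch())
```

Without `label_cls=FloatList`, a numeric column like this could be picked up as categories (or, for arrays of floats, as one-hot encoded classes), which is rarely what you want for regression.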
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
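To make that concrete, here is a minimal sketch of a hypothetical processor (not part of the library) for numeric items: it subtracts a mean that is computed the first time `process` runs (i.e. on the training set) and then reused unchanged on the other sets:

```python
class MeanSubtractProcessor(PreProcessor):
    def __init__(self, ds:ItemList=None):
        super().__init__(ds)
        self.mean = None
    def process_one(self, item):
        return item - self.mean
    def process(self, ds):
        # state is computed on the first dataset processed (the training set) and reused afterwards
        if self.mean is None: self.mean = ds.items.mean()
        super().process(ds)   # the default applies process_one to every item of ds
```

You would attach such a processor through the `processor` argument of your [`ItemList`](/data_block.htmlItemList), or via the `_processor` class variable of a custom subclass.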
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
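As a quick illustration (a sketch reusing the MNIST `path` from the labeling example above), indexing a [`LabelList`](/data_block.htmlLabelList) returns an `(x, y)` tuple, and the underlying inputs and targets are available as its `x` and `y` [`ItemList`](/data_block.htmlItemList)s:

```python
lls = ImageList.from_folder(path).split_by_folder().label_from_folder()
x,y = lls.train[0]              # one (image, category) pair from the training set
lls.train.x, lls.train.y        # the underlying input and label ItemLists
```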
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
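To make this concrete, here is a minimal sketch of a custom `PreProcessor` along the lines of the tabular example above. `MedianFillProcessor` is a hypothetical class written only for illustration (it assumes the items are plain floats): the first dataset it processes, which fastai makes the training set, is used to compute the median, and that stored state is then reused unchanged on the validation (and test) set:

```python
import numpy as np
from fastai.data_block import PreProcessor

class MedianFillProcessor(PreProcessor):
    "Hypothetical processor: fill missing (NaN) float items with the training-set median."
    def __init__(self, ds=None):
        self.median = None                     # state shared across the datasets it processes
    def process_one(self, item):
        # replace a missing value with the median learned on the training set
        return self.median if np.isnan(item) else item
    def process(self, ds):
        # the state is computed only the first time, i.e. on the training set
        if self.median is None:
            self.median = float(np.nanmedian(np.array(ds.items, dtype=np.float64)))
        ds.items = np.array([self.process_one(o) for o in ds.items])
```

Such a processor would be passed through the `processor` argument of your `ItemList` in step 1 (or set as the `_processor` class variable of a custom subclass), and the library then applies it after the splitting and labelling steps.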
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
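In practice a `LabelList` pairs the inputs `ItemList` (`x`) with the labels `ItemList` (`y`) and behaves like a PyTorch dataset: indexing it returns an `(x, y)` tuple and `len` gives its size. A minimal sketch, assuming fastai v1 and the MNIST_TINY dataset used earlier:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
x, y = ll.train[0]          # an (Image, Category) pair
len(ll.train), y
```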
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column (default to the third column of the dataframe). The examples put in the validation set correspond to the indices with `True` value in that column. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
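For regression targets the labels end up in a `FloatList` instead, and the default loss function becomes a regression loss (`MSELossFlat` in fastai v1). Here is a minimal sketch, assuming the MNIST sample; the integer class labels of the csv are forced into floats with `label_cls=FloatList` purely for illustration:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')
ll = (ImageList.from_df(df, path)
      .split_by_rand_pct()
      .label_from_df(cols=1, label_cls=FloatList))   # treat the labels as regression targets
ll.train.y
```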
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. 
For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. 
`processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` into a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing floats in `items` for regression. The values will be log-transformed if the `log` flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` to every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them, and if set to `True`, the transforms will be applied to both input and target. Add a test set To add a test set, you can use one of the two following methods.
###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.from_lists) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. 
You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? 
-> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? 
-> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
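Combined with a fixed seed, this should make random operations such as an 80/20 split reproducible across machines. A minimal sketch, reusing `path_data` from above (the 0.2 split fraction is just an example):

```python
np.random.seed(42)                                  # fix the RNG state
il = ItemList.from_folder(path_data, presort=True)  # deterministic item order
sd = il.split_by_rand_pct(0.2)                      # same split on every machine
```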
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default behavior is simply to return `self.items[15]`. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown When creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do something with the item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change its value.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
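In other words, the dispatch rule can be sketched roughly like this (a simplified, hypothetical illustration, not the actual fastai source):

```python
def toy_get_label_cls(labels):
    # look at the first label to decide which label class to use
    it = labels[0]
    if isinstance(it, (float, np.floating)):            return FloatList          # regression
    if isinstance(it, (str, int, np.integer)):           return CategoryList       # single-label classification
    if isinstance(it, (list, tuple, set, np.ndarray)):   return MultiCategoryList  # multi-label classification
    return CategoryList
```

The real method also takes an explicit `label_cls` argument into account when one is passed, as mentioned above.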
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
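Mirroring the [`CategoryList`](/data_block.htmlCategoryList) example above, here is a minimal sketch of constructing a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) directly from already-encoded items (the tag names are made up purely for illustration):

```python
# each element of items holds the label codes of one sample
items = [np.array([0, 2]), np.array([1])]
mcl = MultiCategoryList(items, classes=['cat', 'dog', 'outdoor'])
mcl
```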
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
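For instance, here is a minimal sketch (using the MNIST_TINY sample as elsewhere in these docs) of adding the default `test` folder at the end of the labelling step; all of its items receive that empty label:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = (ImageList.from_folder(path)
        .split_by_folder('train', 'valid')
        .label_from_folder()
        .add_test_folder())   # defaults to the 'test' subfolder; its items stay unlabelled
ll.test
```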
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output You can deactivate this warning by passing `no_check=True`. ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
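Before the reference entries below, here is a minimal sketch (on the MNIST_TINY sample) contrasting two common choices, a folder-based split and a random split:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)

# split according to the 'train'/'valid' folder names
sd_folders = ImageItemList.from_folder(path).split_by_folder()

# or split randomly, keeping 20% of the items aside for validation
sd_random = ImageItemList.from_folder(path).random_split_by_pct(valid_pct=0.2)
```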
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
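As an illustration (a hedged sketch on the MNIST_TINY sample), `add_test` can also be given an [`ItemList`](/data_block.html#ItemList) built separately, for example from a folder of unlabelled images:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()

# build the test items explicitly and attach them; they all get the same constant/empty label
test_items = ImageItemList.from_folder(path/'test')
ll = ll.add_test(test_items)
ll.test
```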
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
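To make the `x`/`y` pairing concrete, here is a small sketch on the MNIST_TINY sample: indexing into a [`LabelList`](/data_block.html#LabelList) returns an `(x, y)` tuple, with any transforms applied on the fly.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()

x, y = ll.train[0]   # for this dataset, an (Image, Category) pair
y
```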
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
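For example, here is a minimal sketch on the MNIST_TINY sample (the individual methods are documented just below): keep only the PNG files, or keep a random subset of the items.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)

# keep only the items whose filename ends in .png (the filter mutates the list in place)
il_png = ImageItemList.from_folder(path).filter_by_func(lambda fname: fname.suffix == '.png')

# keep each item with probability 0.5, i.e. roughly half of them at random
il_half = ImageItemList.from_folder(path).filter_by_rand(0.5)
```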
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
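To make the idea of train-set state concrete, here is a hypothetical processor (a hedged sketch, not a class shipped with fastai): it computes the median of the labels the first time `process` is called, which happens on the training set, and reuses that value to fill missing labels in the other sets.

```python
from fastai.basics import *

class FillMedianProcessor(PreProcessor):
    "Hypothetical example: fill missing float labels with the training-set median."
    def __init__(self, ds=None):
        self.median = None
    def process(self, ds):
        # the state is computed once, on the first dataset processed (the training set)
        if self.median is None:
            vals = [o for o in ds.items if o is not None and not np.isnan(o)]
            self.median = float(np.median(vals))
        super().process(ds)   # the default process applies process_one to every item of ds
    def process_one(self, item):
        return self.median if (item is None or np.isnan(item)) else item
```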
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single classificatio problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single multi-classificatio problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. 
You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.from_lists) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. 
Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? 
-> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi-classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using an index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can concatenate another [`ItemList`](/data_block.htmlItemList) object in place. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necessarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ... This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train and 20% validation, even while correctly seeding. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms.
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes: _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multi-labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file in the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching subfolders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head
show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its default is simply to return that item. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally; just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them on to `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s, which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them on to `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and (optionally) testing [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically selects a label class according to the item type of `labels`, where `labels` can be a `Collection`, a `pandas.core.frame.DataFrame` or a `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList).
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure to pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs the name of the folder that a file Path object immediately belongs to, and then calls `label_from_func` with that lambda function as input. In practice, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency; for details, see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as in the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Internally, `label_from_func` applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList), puts all the function outputs into a list, and then passes the list on to [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`.
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
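To mirror the [`CategoryList`](/data_block.htmlCategoryList) examples above, here is a small sketch of building a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) directly. The tag names are made up for illustration, and `items` is given as lists of indices into `classes`, which is what the processor would normally produce after labeling; this is not a cell from the original docs. ###Code
from fastai.vision import *
classes = ['cat', 'dog', 'indoor', 'outdoor']   # hypothetical tag vocabulary
items = [[0, 2], [1], [0, 3]]                   # each item stores the indices of its tags
mcl = MultiCategoryList(items, classes=classes); mcl
mcl[0]
###Output
_____no_output_____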
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages in situations where the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to both the training and validation sets simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data_block API. With the following example, we may understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all of the training, validation and even testing [`ItemList`](/data_block.htmlItemList)s get to call `label_from_folder`; it then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last. You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is done exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and it does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessor` objects. Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects for the files in `path` that have a suffix in `extensions`; hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be included; `include` is used to select particular folders to look in. Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files), which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files.
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which contain your images. In the example below, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space (as determined by the `label_delim` argument of `label_from_df`) separated string in the labels column. `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory, so the paths in your csv file should be relative to `path` and not absolute paths. In the example below, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with the suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))`. ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s, which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and (optionally) testing [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set, and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) selects a label class according to the item type of `labels`, where `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList).
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
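As a purely illustrative sketch (the tag names and one-hot rows below are made up for this example, they do not come from the datasets used elsewhere in these docs), passing one-hot encoded items together with an explicit list of `classes` might look like this:

```python
from fastai.vision import *
import numpy as np

# Each row is a one-hot encoding over three hypothetical tags.
# With one_hot=True the tag names cannot be recovered from the rows themselves,
# so `classes` has to be given explicitly.
items = np.array([[1, 0, 1],
                  [0, 1, 0]], dtype=np.float32)
mcl = MultiCategoryList(items, classes=['cat', 'dog', 'outdoor'], one_hot=True)
```

When your labels are plain tag strings instead (as in the planet example above), you would normally not build this list by hand but let `label_from_df(label_delim=' ')` create it for you.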
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
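For instance, here is a minimal sketch (reusing the MNIST_TINY layout from the rest of these docs) of attaching an unlabelled test set with `add_test`; every test item gets the constant/empty label described above:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = (ImageList.from_folder(path)
        .split_by_folder()        # 'train' and 'valid' folders
        .label_from_folder())     # class = parent folder of each image
# attach the images from the 'test' folder; no labels are collected for them
ll = ll.add_test(ImageList.from_folder(path/'test'))
```

Chaining `.add_test_folder('test')` right after the labelling step achieves the same thing, as in the very first example on this page.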
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also prints helpful warning messages when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see the example below). Now, some of you may be surprised, because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). This works thanks to a bit of magic in the fastai data block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) (or an object of one of its subclasses) must first be split into an [`ItemLists`](/data_block.htmlItemLists) before it can be labelled to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we call `ItemLists.label_from_folder()`. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s in one go.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. 
For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. 
`processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single classificatio problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single multi-classificatio problem. Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. 
You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) show_doc(ItemList.get_label_cls) show_doc(ItemLists.transform_y) show_doc(LabelList.to_df) show_doc(FloatList.reconstruct) show_doc(LabelList.to_csv) show_doc(LabelList.export) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(LabelList.load_empty) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(ItemList.to_text) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory, each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .add_test_folder() #Optionally add a test set .datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.test_ds[0] data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ImageMultiDataset) #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,8), is_train=False) ###Output _____no_output_____ ###Markdown This new API also allows us to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, the road...)
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. The new thing is that we use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as the ones applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown Then it's very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, is_train=False, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Give the inputs The inputs we want to feed our model are regrouped in the following class. It contains methods to then attribute labels to them. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). 
Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. ###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(SplitData.datasets) show_doc(SplitData.add_test) show_doc(SplitData.add_test_folder) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). 
Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. 
The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? 
-> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. 
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. 
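Before looking at the class itself, here is a minimal, hedged sketch of a custom processor; the class name and the lowercasing behaviour are purely illustrative and not part of the library. It only overrides `process_one`, and relies on the default `process` to apply it to every item of a dataset. ###Code
# Hypothetical example (not a library class): a processor that lowercases every string item.
# `process_one` is the one method a subclass needs to write; the default `process`
# applies it to every item of the dataset it is given.
class LowercaseProcessor(PreProcessor):
    def process_one(self, item):
        return item.lower()
###Output _____no_output_____ ###Markdown The processors provided by the library follow the same pattern, usually also storing some state computed on the training set (a vocabulary, per-column medians...) so that exactly the same processing can be reused on the validation set.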
###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`.
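To see where these inner classes appear in practice, here is a small, hedged illustration that rebuilds the MNIST data from the labelling example above, this time stopping before `.train` so we can look at the intermediate objects (the exact reprs will differ; only the types matter here). ###Code
# Assumes `path` still points to the MNIST_TINY data used earlier.
lls = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
lls.train, lls.valid    # the training and validation LabelList held by the LabelLists
x, y = lls.train[0]     # indexing a LabelList is expected to yield an (input, target) pair
###Output _____no_output_____ ###Markdown The methods of [`ItemLists`](/data_block.htmlItemLists) and [`LabelLists`](/data_block.htmlLabelLists) are listed below.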
###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) show_doc(ItemList.get_label_cls) show_doc(ItemLists.transform_y) show_doc(LabelList.to_df) show_doc(FloatList.reconstruct) show_doc(LabelList.to_csv) show_doc(LabelList.export) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(LabelList.load_empty) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(ItemList.to_text) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default is to return exactly that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward: you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes two folder names ('train' and 'valid' in the following example) and splits `il`, the large [`ImageList`](/vision.data.htmlImageList), into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for the training set and the other for the validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)` to turn the 'train' and 'valid' folders into two lists of indexes, and passes them to `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, which are finally attached to an [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:], len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way: it indexes the entire `il.items`, loops over every item and, if an item belongs to the named folder (e.g., 'train'), puts its index into a list. The folder `name` is the only input, and the output is that list of indexes. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10].
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them to `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s, which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them to `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and (optionally) testing [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predicting 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure to pass `label_cls = FloatList` so that learners created from your databunch initialize correctly.
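For instance, here is a minimal sketch of forcing regression labels; it reuses the MNIST_SAMPLE `labels.csv` from above and simply treats its 0/1 `label` column as a float target, purely for illustration.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')

# Without label_cls, the integer labels would be inferred as classes (a CategoryList);
# label_cls=FloatList turns this into a regression problem instead.
data = (ImageList.from_df(df, path)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='label', label_cls=FloatList)
        .databunch())
data.train_ds.y  # a FloatList rather than a CategoryList
```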
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
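To make that idea concrete, here is a small, hypothetical processor (not one that ships with fastai) in the spirit of the median-filling example above: it computes its state on the first dataset it sees (the training set) and reuses that state afterwards.

```python
import numpy as np
from fastai.data_block import PreProcessor

class MedianFillProcessor(PreProcessor):
    "Hypothetical processor: remember the training-set median and use it to fill NaNs."
    median = None

    def process_one(self, item):
        return self.median if np.isnan(item) else item

    def process(self, ds):
        items = np.asarray(ds.items, dtype=float)
        if self.median is None:                      # first call: the training set
            self.median = float(np.nanmedian(items))
        ds.items = np.array([self.process_one(o) for o in items])
```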
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
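As a quick illustration before the method list below (reusing the MNIST_TINY labelling from earlier in these docs), a `LabelList` behaves like a dataset of `(x, y)` pairs:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageList.from_folder(path).split_by_folder().label_from_folder()

# Indexing the training LabelList returns an (input, target) pair.
x, y = ll.train[0]
print(type(x).__name__, y)  # e.g. 'Image' and its Category label
```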
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages in situations where the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data_block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before being labelled to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we call `ItemLists.label_from_folder()`. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s all at once.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. 
###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
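As a minimal, hypothetical sketch of that contract (not an existing fastai processor), a processor for string items might simply lower-case them:

```python
from fastai.data_block import PreProcessor

class LowerCaseProcessor(PreProcessor):
    "Hypothetical processor that lower-cases every string item."
    def process_one(self, item):
        return item.lower()
    # process(ds) is inherited: it applies process_one to every item of ds.
```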
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .split_by_rand_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .split_by_rand_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
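To make this more concrete, here is a small, hypothetical subclass (not part of the library) in the spirit of the tabular example above: it computes a median the first time `process` is called (i.e. on the training set) and reuses that state for the other sets.

```python
from fastai.basics import *
import numpy as np

class FillWithMedian(PreProcessor):
    "Toy processor: replace NaNs with the median computed on the training set."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.median = None                      # state, computed on the training set
    def process_one(self, item):
        return self.median if np.isnan(item) else item
    def process(self, ds):
        if self.median is None:                 # first call -> training set -> compute the state
            self.median = float(np.nanmedian(np.array(ds.items, dtype=np.float64)))
        super().process(ds)                     # applies process_one to every item of ds
```

The two processors used for categorical labels are detailed next.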
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
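To get a feel for what a [`LabelList`](/data_block.htmlLabelList) holds, here is a small sketch (on MNIST_TINY, purely for illustration): the `train_ds` and `valid_ds` attributes of a [`DataBunch`](/basic_data.htmlDataBunch) are `LabelList`s, and indexing one returns an `(x, y)` pair.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .databunch())
x, y = data.train_ds[0]   # data.train_ds is a LabelList; indexing it returns (Image, Category)
print(type(data.train_ds).__name__, y)
```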
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output You can deactivate this warning by passing `no_check=True`. ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. 
If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags. ###Code jekyll_warn("One-hot encoded labels aren't supported yet. Items need to be lists of tags or a string with a corresponding `sep`.") show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
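For example, a sketch of a fully reproducible random split, relying on `presort=True` for a stable file order (as described above) and on the `seed` argument of `split_by_rand_pct` (assumed available here; the exact values are arbitrary), could look like this:

```python
from fastai.vision import *

path_data = untar_data(URLs.MNIST_TINY)
# presort=True -> items are loaded in the same (sorted) order on every machine;
# seed=42      -> the random 80/20 train/valid split is then identical across runs.
sd = (ImageList.from_folder(path_data/'train', presort=True)
      .split_by_rand_pct(valid_pct=0.2, seed=42))
```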
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
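In practice you rarely build a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) by hand; it is created for you whenever the labels are lists of tags. A minimal sketch with the planet sample used elsewhere in these docs (tags are space-delimited in the csv): ###Code
planet = untar_data(URLs.PLANET_TINY)
ll_multi = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
            .split_by_rand_pct()
            .label_from_df(label_delim=' '))  # the labels become a MultiCategoryList
ll_multi.train.y ###Output _____no_output_____ ###Markdown Here the `classes` are simply the unique tags found in the labels.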
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
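As a minimal sketch (using the MNIST_TINY layout from the earlier examples, which has an unlabelled `test` folder next to `train` and `valid`): ###Code
path = untar_data(URLs.MNIST_TINY)
ll_test = (ImageList.from_folder(path)
           .split_by_folder()       # 'train' and 'valid' folders
           .label_from_folder()
           .add_test_folder())      # grabs the unlabelled items in path/'test'
ll_test.test ###Output _____no_output_____ ###Markdown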
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we add labels to both the training and validation sets simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see the example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data_block API. With the following example, we may understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before labelling can turn it into a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists); only then can we call `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). 
The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API leps you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. 
As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with `train` and `valid` directories, each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(rows=3, figsize=(5,5)) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like this: ###Code tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ImageMultiDataset) #How to convert to datasets? -> use ImageMultiDataset .transform(tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,10)) ###Output _____no_output_____ ###Markdown This new API also lets you use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, the road...)
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. The new thing is that we use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as the ones applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=128, tfm_y=True) #Data aug? -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(7,7)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(path/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown Then it's very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset .transform(tfms, size=128, tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3) ###Output _____no_output_____ ###Markdown Give the inputs The inputs we want to feed our model are regrouped in the following class. It contains methods to then attribute labels to them. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). 
Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. In future development, it will contain factory methods to directly create a [`LabelList`](/data_block.htmlLabelList) from a source of labelled data (a csv file or a dataframe with inputs and labels) for instance. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. 
You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to both input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected even if they are available. Instead, either the passed `label` argument or the first label from `train_ds` will be used for all entries of this dataset.
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
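For a concrete peek (a small sketch reusing the `ll` object defined earlier on this page, which is a training `LabelList` built from MNIST_TINY without transforms), indexing returns an `(x, y)` pair, and the underlying inputs and targets are exposed as `x` and `y`:

```python
x, y = ll[0]   # an (Image, Category) pair
ll.x, ll.y     # the input ItemList and the label ItemList
len(ll)        # number of items in this LabelList
```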
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
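For instance, here is a small sketch of how `filter_by_func` might be used (it reuses the MNIST `path` defined at the top of this page; the `.png` condition is only an illustration):

```python
il = (ImageItemList.from_folder(path)
      .filter_by_func(lambda fname: Path(fname).suffix == '.png'))
len(il)  # compare with len(ImageItemList.from_folder(path)) to see what was dropped
```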
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). 
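As a small sketch of what this looks like in practice (reusing the `ll` training `LabelList` created a few cells above), the labels produced by `label_from_folder` are stored in a `CategoryList`:

```python
ll.y          # a CategoryList
ll.y.classes  # the classes inferred from the folder names
ll.y[0]       # one label, reconstructed as a Category
```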
###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` into a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will apply `log` to the values if that flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to both input and target. Add a test set To add a test set, you can use one of the two following methods.
###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. 
You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? 
-> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? 
-> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) show_doc(ItemList.from_df) path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. 
###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure to pass `label_cls = FloatList` so that learners created from your databunch initialize correctly.
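For example, here is a sketch of forcing regression labels from a dataframe column (the dataframe and its `name`/`age` columns are hypothetical; they only serve to show where `label_cls` goes):

```python
# hypothetical dataframe: a 'name' column with image file names and a float 'age' target
data = (ImageList.from_df(df, path, cols='name')
        .split_by_rand_pct(0.2)
        .label_from_df(cols='age', label_cls=FloatList)  # regression labels, not one-hot classification
        .databunch())
```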
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
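To make the mechanism concrete, here is a sketch of a custom processor (the `FillMissingWithMean` class is purely illustrative and not part of the library): it computes its state on the first dataset it processes, which is the training set, and then reuses that state for the other sets.

```python
import numpy as np

class FillMissingWithMean(PreProcessor):
    "Illustrative processor: replace NaN items with the mean computed on the training set."
    def process(self, ds):
        # the training set is processed first: compute the state there, then reuse it
        if not hasattr(self, 'mean'):
            self.mean = float(np.nanmean(np.array(ds.items, dtype=np.float64)))
        super().process(ds)  # applies process_one to every item of ds
    def process_one(self, item):
        return self.mean if np.isnan(item) else item
```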
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
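As a quick sketch (reusing the MNIST `path` and `tfms` defined at the beginning of this page), the conversion call is where the batch size, number of workers and collate function are chosen:

```python
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .transform(tfms, size=64)
        .databunch(bs=32, num_workers=2))  # these arguments are forwarded to DataBunch.create
```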
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation set? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? For each of those questions, you can have multiple possible blocks: your inputs might be in a folder, a csv file, a dataframe. You may want to split them randomly, by certain indexes or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may or may not have data augmentation to deal with. Or a test set. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing you total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin by our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. There is also a test set containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() from fastai.tabular import * ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add [`PreProcessor`](/data_block.htmlPreProcessor) that are going to be applied to our data once the splitting and the labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
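For instance, here is a rough sketch with the MNIST data loaded above (the choice of suffix and folders is purely illustrative):

```python
il = (ImageItemList.from_folder(path)
      .filter_by_func(lambda fname: fname.suffix == '.png')   # keep only png files
      .filter_by_folder(include=['train', 'valid']))          # drop anything not directly under train/ or valid/
len(il)
```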
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- changing the `create_func` (example: opening images with your custom function and not [`open_image`](/vision.image.htmlopen_image))- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation.If this isn't the case and you really need to write your own class, here is what you should code:```class MyCustomItemList(): If you need custom arguments you will have to overwrite __init__ and new like this. def __init__(self, items:Iterator, my_args, **kwargs): super().__init__(items, **kwargs) store my args, initialize what is needed. def new(self, items:Iterator, **kwargs)->'NumericalizedTextList': Retrive your custom args stored and send them to new like this return super().new(items=items, my_args, **kwargs) This is how to get your data stored at index i def get(self, i): o = super().get(i) return what you need from o```You can add custom splitting or labelling methods if you need them. ###Code show_doc(ItemList.predict) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). ###Code show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. 
If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.html#ItemList) suitable for storing the floats in items for regression. Will apply `log` to the values if that flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.html#ItemList) during step 1, it will be applied here. A processor is a transformation that is applied to all the inputs once and for all, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.html#PreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.html#PreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.html#PreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.html#DataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.html#DataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.html#DataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. 
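For example, here is a hedged sketch reusing the MNIST pipeline from the top of this page, just to show where those arguments go (the values of `bs` and `num_workers` are arbitrary, and `data_collate` is only the library default, shown to mark where a custom collate function would be passed):

```python
data = (ImageItemList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .transform(tfms, size=64)
        .databunch(bs=32, num_workers=2, collate_fn=data_collate))
data.train_dl.batch_size, len(data.train_dl)   # batch size and number of training batches
```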
###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(ItemList.label_cls) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(CategoryList.predict) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(LabelList.filter_missing_y) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use To do:- make imdb unsup filter work- ?LabelList class methods ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. 
With the data block API, the same thing is achieved like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn) .transform(get_transforms(), tfm_y=True) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. 
There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(ItemList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_const) show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. ###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. 
This class contains methods to create `SplitDataset`. ###Code show_doc(LabelList.from_csv) show_doc(LabelList.from_csvs) show_doc(LabelList.from_df) ###Output _____no_output_____ ###Markdown Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(LabelList.split_by_valid_func) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(LabelList.from_csv) show_doc(SplitData.add_test) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes and functions ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) show_doc(get_files) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. 
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. 
That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. 
Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
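For instance, here is a rough sketch of two possible splits of the planet-style data loaded earlier on this page (the validation rule in `split_by_valid_func` is an arbitrary example):

```python
il = ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')

# Random split, keeping the default 20% of the items for validation
sd_random = il.random_split_by_pct(valid_pct=0.2)

# Rule-based split: every filename starting with 'train_1' goes to the validation set
sd_rule = il.split_by_valid_func(lambda fname: fname.name.startswith('train_1'))
```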
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. 
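For instance, here is a minimal sketch of passing an explicit placeholder label when adding a test folder (the dataset and the label value `'3'` are assumptions made for illustration):

```python
from fastai.vision import *

# sketch: no labels are read for the test images; every entry of the test set
# simply receives the constant placeholder label '3' passed below
path = untar_data(URLs.MNIST_TINY)
ll = (ImageItemList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder('test', label='3'))
```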
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:

```
data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        ...)
```

Another approach is to train with a normal validation set and then, once training is over, validate the labelled test set by treating it as a validation set:

```
tfms = []
path = Path('data').resolve()
data = (ImageItemList.from_folder(path)
        .split_by_pct()
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize()
       )
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)

# now replace the validation dataset entry with the test dataset as a new validation dataset:
# everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder`
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageItemList.from_folder(path)
        .split_by_folder(train='train', valid='test')
        .label_from_folder()
        .transform(tfms)
        .databunch()
        .normalize()
       )
learn.data = data_test
learn.validate()
```

Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown *ImageDataBunch* works with data in directories that follow the ImageNet style. In this style there is a *train* subdirectory and a *valid* subdirectory, each containing one subdirectory per class. The deepest subdirectories contain all the picture files. Here is the code for the code for the *data block* API to achieve the same result as the code in the cell above. ###Code path = untar_data(URLs.MNIST_SAMPLE) tfms = get_transforms(do_flip=False) data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset .transform(tfms, size=224) #Data augmetnation? -> use tfms with a size of 224 .databunch()) #Finally? 
-> use the defaults for conversion to ImageDataBunch data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the defulat 20% in valid .datasets(ImageMultiDataset) #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmetnation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,10), is_train=False) ###Output _____no_output_____ ###Markdown This new API also allows us to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding masks are in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road, etc.). ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. 
There is a helper function in the fastai library that reads the annotation file and returns the list of image names with the associated list of labelled bboxes. Next we convert the lists to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. Our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, is_train=False, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. In future development, it will contain factory methods to directly create a [`LabelList`](/data_block.htmlLabelList) from a source of labelled data (a csv file or a dataframe with inputs and labels) for instance. Split the data between train and validation. 
The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) il_data[1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data ###Output _____no_output_____ ###Markdown Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. 
In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. 
###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. ###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) ###Output _____no_output_____ ###Markdown Let's try this functions for cifar-10. In every folder, we have our image files that are named as 0001.png, 0002.png. So we have 10 images of 0001.png (one for each class) in the train folder. `valid_names` can be a list containing names of your images that you want to place in your validation set. __Note__ :- Here in `valid_names` you need to specify the image name and not the image path. So for `/path/to/image.png`, we only need to add `image.png` in our valid_names. ###Code path = untar_data(URLs.CIFAR) path.ls() data = (ImageList.from_folder(path) .split_by_files(valid_names=['0001.png', '0002.png'])) data ###Output _____no_output_____ ###Markdown In the Valid we can see 40 images (20 images of 0001.png, 20 images of 0002.png from both train and valid folders) ###Code show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. 
`fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
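For instance, here is a minimal sketch of forcing regression labels (MNIST_SAMPLE is reused from the examples above, and the float `target` column is invented purely to illustrate the `label_cls` argument):

```python
from fastai.vision import *

# sketch: turn the digit labels into made-up float targets and label them as a regression problem
path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')
df['target'] = df['label'].astype(float)                     # hypothetical numeric target
data = (ImageList.from_df(df, path)
          .split_by_rand_pct()
          .label_from_df(cols='target', label_cls=FloatList) # labels stored in a FloatList
          .databunch())
```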
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
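To make the mechanism concrete, here is a minimal sketch of a custom processor (the class name and its mean-subtraction logic are invented for illustration; they are not part of the library):

```python
from fastai.basics import *

# sketch: the state (a mean) is computed the first time `process` runs (on the training set)
# and is then reused unchanged for the validation (and possibly test) set
class MeanSubtractProcessor(PreProcessor):
    def process(self, ds):
        if not hasattr(self, 'mean'):                          # state computed once, on the train set
            self.mean = np.mean([float(o) for o in ds.items])
        super().process(ds)                                    # applies process_one to every item
    def process_one(self, item):
        return float(item) - self.mean
```

Such a processor could then be passed to an [`ItemList`](/data_block.htmlItemList) through its `processor` argument, or set as the `_processor` class variable of a custom subclass.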
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. 
###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi-label classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data, whether it comes from a dataframe/csv or is stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of an [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in that path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data`, and you can access individual items using an index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can concatenate another [`ItemList`](/data_block.htmlItemList) object in place. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necessarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ... This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operation, say randomly splitting the data into 80% train and 20% validation, even while correctly seeding. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen in the listing below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms.
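Once the load order is deterministic, seeding the random operations makes them reproducible as well. Here is a minimal sketch; it assumes the `seed` argument of `split_by_rand_pct`, which is available in recent fastai 1.x releases.
###Code
path_data = untar_data(URLs.MNIST_TINY)
# presort=True fixes the load order; seed=42 then fixes the random split on top of it
sd = (ImageList.from_folder(path_data, presort=True)
        .split_by_rand_pct(valid_pct=0.2, seed=42))
sd
###Output
_____no_output_____
###Markdown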
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown And `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns: the first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a string in the labels column with the labels separated by a delimiter (a space, as determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file in the above format. How to set `path`? `path` refers to your root data directory, so the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with the suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will get a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default implementation simply returns that item. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do something to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change its value.
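As an aside, if you do go the custom [`ItemList`](/data_block.htmlItemList) route described above, overriding `get` is often all you need. Below is a minimal, hypothetical sketch (the subclass and the toy items are made up for illustration).
###Code
class LowerCaseList(ItemList):
    "Toy ItemList holding strings, which `get` returns lower-cased."
    def get(self, i):
        return str(super().get(i)).lower()

toy = LowerCaseList(['Hello', 'WORLD'], path='.')
toy[0], toy[1]
###Output
_____no_output_____
###Markdown
Back to `new`: the cells below change the arguments listed in `copy_new` on `itemlist1`.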
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward: you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example) to split `il`, the large [`ImageList`](/vision.data.htmlImageList), into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for the training set and the other for the validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)` to turn the 'train' and 'valid' folders into two lists of indexes, and passes them on to `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, which are finally attached to an [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way: it indexes the entire `il.items`, loops over every item and, if an item belongs to the named folder (e.g., 'train'), puts its index into a list. The folder `name` is the only input, and the output is that list of indexes. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set, like [1, 3, 10].
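As an aside, here is what the `split_subsets` call mentioned above looks like in practice; a minimal sketch using the sizes from that example.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
# train on a random 8% of the data, validate on a random 20%
sd_sub = ImageList.from_folder(path).split_subsets(train_size=0.08, valid_size=0.2)
sd_sub
###Output
_____no_output_____
###Markdown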
With `split_by_idx`, you can also pass a contiguous list like `list(range(1000))`. ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them on to `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s, which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them on to `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and (optionally) testing [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set, and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically selects a label class according to the item type of `labels`, where `labels` can be a `Collection`, a `pandas.core.frame.DataFrame` or a `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, it will output [`FloatList`](/data_block.htmlFloatList); and if they are of type Collection, it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList).
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
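For instance, here is a minimal sketch of the one-hot form; the class names are made up for illustration.
###Code
from fastai.vision import *
# two samples, one-hot encoded over three known classes
items = np.array([[1., 0., 1.], [0., 1., 0.]])
mcl = MultiCategoryList(items, classes=['cat', 'dog', 'fish'], one_hot=True)
mcl
###Output
_____no_output_____
###Markdown
There are also label classes for float targets (regression) and for empty labels: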
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we can add labels to all training and validation data simply by using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see the example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). Well, this is part of the magic of the fastai data block API. With the following example, we may understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which the training, validation and even testing [`ItemList`](/data_block.htmlItemList)s each get to call `label_from_folder`; it then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at the end. You can directly use `LabelLists.__getattr__` to do the labelling, as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object works exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and it does not override [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts the `train.x._processor` classes and the `train.y._processor` classes into separate lists, then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more [`PreProcessor`](/data_block.htmlPreProcessor) objects. Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects for the files in `path` that have a suffix in `extensions`; hidden folders and files are ignored. If `recurse=True`, files in subfolders are searched as well; `include` is used to restrict the search to particular folders. Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files), which turns all filenames inside `f` from the directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files.
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected, even if they are available. Instead, either the passed `label` argument or the first label from `train_ds` will be used for all entries of this dataset.
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
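As a quick, hedged illustration (reusing the MNIST pipeline shown earlier in these docs), a [`LabelList`](/data_block.htmlLabelList) behaves like a dataset: indexing it returns an `(x, y)` pair, and its `x` and `y` attributes hold the underlying input and label [`ItemList`](/data_block.htmlItemList)s.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()
x, y = ll.train[0]          # an (image, label) pair
ll.train.x, ll.train.y      # the input ItemList and the label ItemList
```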
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory, each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) ###Output _____no_output_____ ###Markdown - InputList- LabelList- SplitData- SplitDatasets ###Code data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? -> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .add_test_folder() #Optionally add a test set .datasets() #How to convert to datasets?
-> use ImageClassificationDataset .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.test_ds[0] data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the defulat 20% in valid .datasets() #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmetnation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,8), ds_type=DatasetType.Valid) ###Output _____no_output_____ ###Markdown This new API also allows to use datasets type for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. 
We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of image names with the list of labelled bboxes associated with them. We convert it to a dictionary that maps image names to their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, ds_type=DatasetType.Valid, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All of the following are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarily intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. ###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarily intended for inputs that are filenames, but could work in other settings.
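For instance, here is a small sketch (using the older `ImageFileList` API shown in this section, and a purely illustrative lambda) that labels each image with the name of its parent folder, which is the hand-written equivalent of `label_from_folder`:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
# Each item is a filename (a Path), so the label can be derived from its parent folder
labelled = ImageFileList.from_folder(path).label_from_func(lambda fn: fn.parent.name)
```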
###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(SplitData.datasets) show_doc(SplitData.add_test) show_doc(SplitData.add_test_folder) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to split the data into a training and validation set- how to label them- possible transforms to apply- how to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example.
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Split the data The following functions are methods of [`ItemList`](/data_block.htmlItemList), to create an [`ItemLists`](/data_block.htmlItemLists) in different ways. 
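Before the individual methods, a small hedged sketch of one of them: `split_by_valid_func` sends every item for which the function returns `True` to the validation set. The rule below (putting files whose name starts with '7' in the validation set) is purely illustrative.

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
# Items are filenames, so the function receives a Path and decides if it goes to valid
sd = ImageItemList.from_folder(path).split_by_valid_func(lambda fname: fname.name.startswith('7'))
```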
###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown Labelling the inputs All the followings are methods of [`ItemList`](/data_block.htmlItemList) ([`ItemLists`](/data_block.htmlItemLists) delegates them to each one of its [`ItemList`](/data_block.htmlItemList)). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(ItemList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs/targets, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(LabelLists.add_test) show_doc(LabelLists.add_test_folder) show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Preprocessing Preprocessing is a step that happens after the data has been split and labelled, where the inputs and targets go through a bunch of [`PreProcessor`](/data_block.htmlPreProcessor). ###Code show_doc(PreProcessor, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. 
Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? 
-> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file is returned in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space (as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column (default to the third column of the dataframe). The examples put in the validation set correspond to the indices with `True` value in that column. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList). 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the different unique labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
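To see where a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) typically comes from in practice, here is a minimal, hedged sketch (not part of the original docs): labeling a csv column of space-separated tags with `label_from_df(label_delim=' ')`, reusing the PLANET_TINY sample that appears elsewhere in these docs, produces multi-category labels behind the scenes.

```python
from fastai.vision import *

# Minimal sketch: multi-label labeling from a csv of space-separated tags.
# label_from_df with label_delim=' ' yields MultiCategoryList labels.
planet = untar_data(URLs.PLANET_TINY)
ll = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        .split_by_rand_pct()
        .label_from_df(label_delim=' '))
ll.train.y          # a MultiCategoryList
ll.train.y.classes  # the unique tags found in the csv
```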
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
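For concreteness, here is a minimal sketch (not from the original docs) of attaching a test set at the end of a labeling chain; per the note above, the test items get either a constant `label` passed to these methods or an empty label.

```python
from fastai.vision import *

# Minimal sketch: add a test set after splitting and labeling.
path = untar_data(URLs.MNIST_TINY)
ll = (ImageList.from_folder(path)
        .split_by_folder()         # 'train' / 'valid' folders
        .label_from_folder()
        .add_test_folder('test'))  # or: .add_test(ImageList.from_folder(path/'test'))
ll.test
```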
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi-label classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image-to-image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data`, and you can access individual items using an index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can concatenate another [`ItemList`](/data_block.htmlItemList) object in place. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necessarily return in alphanumeric order by default. In the above: 1503.png, ... 617.png, 585.png ... This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, such as randomly splitting the data into 80% train and 20% validation, even when seeding correctly. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms.
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we get the outputs shown above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the example below, the _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multi-labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default behavior of `get` is simply to return the corresponding entry of `items`. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do something with your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used implicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to open and display the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
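For instance, a small sketch (not from the original docs) that reloads the same sample and keeps only those three rows for validation:

```python
from fastai.vision import *

# Small sketch: put only the rows at indices 1, 3 and 10 of the dataframe
# into the validation set.
path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')
data = (ImageList.from_df(df, path)
          .split_by_idx([1, 3, 10]))
data
```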
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then passes them onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach them to an [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and passes them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, validation and (optionally) testing [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method; you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically selects a label class according to the item type of `labels`, where `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); if they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output [`MultiCategoryList`](/data_block.htmlMultiCategoryList).
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
asks the inputs `x` to be processed by `xp` with `x.process(xp)`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, validation and (optionally) testing [`ItemList`](/data_block.htmlItemList)s as its properties. It also prints helpful warning messages when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists) right after a large [`ItemList`](/data_block.htmlItemList) is split and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then we can add labels to the training and validation sets simply by calling `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see the example below). Now, some of you may be surprised, because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList), not [`ItemLists`](/data_block.htmlItemLists). This is part of the magic of the fastai data block API. With the following example, we can understand a little better how labelling gets done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) (or subclass) object must first be split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before it can be labelled to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a training set folder with no split needed, we still must call `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then can we call `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practically, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label the training, validation and test [`ItemList`](/data_block.htmlItemList)s once and for all.
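Putting those pieces together, here is a minimal sketch of the whole chain on the same MNIST_TINY `path_data` used above (the batch size of 16 is just an arbitrary choice for illustration):

```python
from fastai.vision import *

path_data = untar_data(URLs.MNIST_TINY)

data = (ImageList.from_folder(path_data)                  # ItemList: collect the image files
        .split_by_folder(train='train', valid='valid')    # ItemLists: split into train/valid
        .label_from_folder()                              # LabelLists: label each set from its parent folder
        .databunch(bs=16))                                 # DataBunch, ready to pass to a Learner

data.train_ds.classes
```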
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which the training, validation and (if present) test [`ItemList`](/data_block.htmlItemList)s each get to call `label_from_folder`; it then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and finally calls [`LabelLists.process`](/data_block.htmlLabelLists.process). You can directly use `LabelLists.__getattr__` to do the labelling, as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object works exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and it does not override [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts the `train.x._processor` classes and the `train.y._processor` classes into separate lists, then instantiates those processors and puts them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more [`PreProcessor`](/data_block.htmlPreProcessor) objects. Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To be more precise, this function returns a list of `FilePath` objects for the files in `path` that have a suffix in `extensions`; hidden folders and files are ignored. If `recurse=True`, files in subfolders are included as well; `include` is used to select which particular subfolders to search. Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files), which turns all filenames inside `f` from the directory `parent/p` into a list of `FilePath` objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files.
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai import * ###Output _____no_output_____ ###Markdown The data block API lets you customize how to create a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly:- where are the inputs- how to label them- how to split the data into a training and validation set- what type of [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) to create- possible transforms to apply- how to warp in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in fodlers following an ImageNet style, with a train and valid directory containing each one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this: ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders .label_from_folder() #How to label? 
-> depending on the folder of the filenames .split_by_folder() #How to split in train/valid? -> use the folders .add_test_folder() #Optionally add a test set .datasets() #How to convert to datasets? .transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.train_ds[0] data.show_batch(rows=3, figsize=(5,5)) data.valid_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageFileList.from_folder(planet) #Where to find the data? -> in planet and its subfolders .label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg') #How to label? -> use the csv file labels.csv in path, #add .jpg to the names and take them in the folder train .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets() #How to convert to datasets? -> use ImageMultiDataset .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally? -> use the defaults for conversion to databunch data.show_batch(rows=3, figsize=(10,8)) ###Output _____no_output_____ ###Markdown The data block API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img .label_from_func(get_y_fn) #How to label? -> use get_y_fn .random_split_by_pct() #How to split between train and valid? -> randomly .datasets(SegmentationDataset, classes=codes) #How to create a dataset? 
-> use SegmentationDataset .transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True .databunch(bs=64)) #Lastly convert in a databunch. data.show_batch(rows=2, figsize=(5,5)) ###Output _____no_output_____ ###Markdown One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)} get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ImageFileList.from_folder(coco) #Where are the images? -> in coco .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .datasets(ObjectDetectDataset) #How to create datasets? -> with ObjectDetectDataset #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=3, ds_type=DatasetType.Valid, figsize=(8,7)) ###Output _____no_output_____ ###Markdown Provide inputs The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels. ###Code show_doc(InputList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) ###Code show_doc(InputList.from_folder) ###Output _____no_output_____ ###Markdown Note that [`InputList`](/data_block.htmlInputList) is subclassed in vision by [`ImageFileList`](/vision.data.htmlImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.htmlImageFileList) in our previous examples). Labelling the inputs All the followings are methods of [`InputList`](/data_block.htmlInputList). Note that some of them are primarly intended for inputs that are filenames and might not work in general situations. ###Code show_doc(InputList.label_from_csv) ###Output _____no_output_____ ###Markdown If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames. 
###Code jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.") show_doc(InputList.label_from_df) jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.") show_doc(InputList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(InputList.label_from_func) ###Output _____no_output_____ ###Markdown This method is primarly intended for inputs that are filenames, but could work in other settings. ###Code show_doc(InputList.label_from_re) show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`. Split the data between train and validation. The following functions are methods of [`LabelList`](/data_block.htmlLabelList), to create a [`SplitData`](/data_block.htmlSplitData) in different ways. ###Code show_doc(LabelList.random_split_by_pct) show_doc(LabelList.split_by_files) show_doc(LabelList.split_by_fname_file) show_doc(LabelList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(LabelList.split_by_idx) show_doc(SplitData, title_level=3) ###Output _____no_output_____ ###Markdown You won't normally construct a [`SplitData`](/data_block.htmlSplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.htmlLabelList). ###Code show_doc(SplitData.datasets) show_doc(SplitData.add_test) ###Output _____no_output_____ ###Markdown Create datasets To create the datasets from [`SplitData`](/data_block.htmlSplitData) we have the following class method. ###Code show_doc(SplitData.datasets) show_doc(SplitDatasets, title_level=3) ###Output _____no_output_____ ###Markdown This class can be constructed directly from one of the following factory methods. ###Code show_doc(SplitDatasets.from_single) show_doc(SplitDatasets.single_from_c) show_doc(SplitDatasets.single_from_classes) ###Output _____no_output_____ ###Markdown Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.htmltorch.utils.data.Dataset) like this. ###Code show_doc(SplitDatasets.dataloaders) ###Output _____no_output_____ ###Markdown The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.htmlvision.data). Utility classes ###Code show_doc(ItemList, title_level=3) show_doc(PathItemList, title_level=3) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. 
Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the `train` and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. 
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. 
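To make that train/valid contract concrete, here is a small illustrative sketch of a custom processor. The `FillMedianProcessor` name and its median-filling behaviour are invented for this example and are not part of fastai: the state is computed the first time `process` is called, which happens on the training set, and is then reused unchanged for the other sets.

```python
from fastai.basics import *

class FillMedianProcessor(PreProcessor):
    "Hypothetical processor: replace NaN labels with the training-set median."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.median = None
    def process_one(self, item):
        # Applied to one item at a time (e.g. at inference).
        return self.median if np.isnan(item) else item
    def process(self, ds):
        # The state is computed on the first set processed (the training set)...
        if self.median is None:
            self.median = float(np.nanmedian(np.asarray(ds.items, dtype=np.float64)))
        # ...and then applied without modification to the validation (and test) sets.
        super().process(ds)
```

Such a processor could then be supplied through the `processor` argument of your [`ItemList`](/data_block.htmlItemList), or set as the default `_processor` class variable of a custom subclass, as described in the section on writing your own [`ItemList`](/data_block.htmlItemList).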
###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? 
-> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? 
-> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. 
###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can do an in-place concatenation with another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necessarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ... This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop), that same dataset and code might return the files in a different order. Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train and 20% validation, even while correctly seeding. The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the files return in ascending order, and this behavior will match across machines and across platforms. Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like the above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but they may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too.
Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? 
-> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? 
-> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
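To make the consequence concrete, here is a minimal sketch (the file names and the tiny one-item split are invented for illustration) of why an unstable listing order breaks reproducibility even with a fixed seed: seeding the RNG only fixes which *positions* end up in the validation set, not which *files* occupy those positions.

```python
import numpy as np

np.random.seed(42)
items = ['1503.png', '617.png', '585.png']          # listing order returned on machine A
valid_pos = np.random.permutation(len(items))[:1]   # positions picked for the validation set
print([items[i] for i in valid_pos])                # on machine B a different listing order puts
                                                    # different files at those same positions
```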
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we get the outputs above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, the _train_ folder contains two folders/classes, _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multiple labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file in the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a ".png" suffix; this method will do that for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclasses by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable. If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` works with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is called implicitly when you index with `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because returning `self.items[i]` is its default behaviour. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown When creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do something to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, `get` is normally called implicitly by `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason an image is displayed instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overrides [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to show the image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick one of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
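For instance, reusing the `df` and `path` defined just above, a quick sketch of passing an explicit list of indices (the particular indices are just the ones from the sentence above):

```python
data = (ImageList.from_df(df, path)
        .split_by_idx([1, 3, 10]))   # rows 1, 3 and 10 of df go to the validation set
data
```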
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
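To see where a [`MultiCategoryList`](/data_block.htmlMultiCategoryList) shows up in practice, here is a sketch that reuses the planet-style multi-label pipeline from the beginning of this page (nothing is assumed beyond that example; the printed type and tag vocabulary are what we would expect to see):

```python
planet = untar_data(URLs.PLANET_TINY)
src = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
        .split_by_rand_pct()
        .label_from_df(label_delim=' '))    # several space-delimited tags per image
type(src.train.y), src.train.y.classes[:5]  # expected: a MultiCategoryList and its tag vocabulary
```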
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
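As a minimal sketch (reusing the MNIST_TINY layout from the top of this page), attaching the unlabelled `test` folder looks like this; any `label` passed here is only a placeholder, never a real target:

```python
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder()   # items in path/'test' all get the same placeholder/empty label
        .databunch())
```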
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). 
The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. 
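For instance, attaching an unlabelled test set right after labelling might look like this (a minimal sketch, assuming a `test` subfolder of unlabelled images under `path`):

```python
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder('test')   # no labels are collected, an empty label is used for every item
        .databunch())
```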
If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, here is what you should code:```class MyCustomItemList(): If you need custom arguments you will have to overwrite __init__ and new like this. def __init__(self, items:Iterator, my_args, **kwargs): super().__init__(items, **kwargs) store my args, initialize what is needed. def new(self, items:Iterator, **kwargs)->'NumericalizedTextList': Retrive your custom args stored and send them to new like this return super().new(items=items, my_args, **kwargs) This is how to get your data stored at index i def get(self, i): o = super().get(i) return what you need from o```You can add custom splitting or labelling methods if you need them. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. 
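For instance, with an ImageNet-style layout you can split on the folder names, or apply your own rule per item (a minimal sketch; the split methods themselves are documented in the cells that follow, and the filename rule shown is purely hypothetical):

```python
path = untar_data(URLs.MNIST_TINY)
# use the 'train' and 'valid' folders to decide the split
sd = ImageItemList.from_folder(path).split_by_folder(train='train', valid='valid')
# or decide per filename with a custom function
sd = ImageItemList.from_folder(path).split_by_valid_func(lambda o: o.name.startswith('9'))
```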
###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. 
If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single classificatio problem. ###Code show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes usings `classes` (if passed) in a single multi-classificatio problem. Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. 
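For instance, with the `ll` object built in the labelling step above (the training `LabelList`), you can inspect inputs and targets directly; indexing returns an `(x, y)` pair with any transforms applied (a minimal sketch):

```python
img, lbl = ll[0]        # one (input, target) pair
ll.x[0], ll.y[0]        # the underlying input and label ItemLists
```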
###Code show_doc(LabelList.from_lists) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(LabelList.clear_item) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(MultiCategoryProcessor.generate_classes) show_doc(CategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
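For instance, to keep only the png files that `from_folder` grabbed, you could filter with a small function (a minimal sketch; the filter methods are documented just below):

```python
il = ImageItemList.from_folder(path)
il = il.filter_by_func(lambda o: o.suffix == '.png')   # o is the Path of each item here
```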
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
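As an illustration, here is a minimal sketch of a custom `PreProcessor`; the class name and the median-filling behaviour are just an example (not part of the library), but the pattern of computing a state on the first dataset processed (the training set) and reusing it afterwards is the one described above:

```python
class FillMedianProcessor(PreProcessor):
    "Fill missing float labels with the median computed on the training set."
    def process_one(self, item):
        return self.median if np.isnan(item) else item
    def process(self, ds):
        # the training set is processed first: compute the state there, then reuse it
        if not hasattr(self, 'median'): self.median = np.nanmedian(ds.items)
        super().process(ds)   # applies process_one to every item of ds
```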
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
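For instance, you could drop everything found in a given subfolder, or keep only a random subset of the items while prototyping (a minimal sketch; the folder name and percentage are arbitrary examples):

```python
il = ImageList.from_folder(path).filter_by_folder(exclude='models')            # drop anything under 'models'
small = ImageList.from_folder(path).use_partial_data(sample_pct=0.1, seed=42)  # keep ~10% of the items
```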
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
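For instance, after the `label_from_folder` call above, the `CategoryProcessor` has collected the classes it found on the training set and they are available on the labels (a minimal sketch, reusing the `ll` object created earlier):

```python
ll.y.classes   # the classes discovered on the training set
ll.y.c2i       # the class-to-index mapping used to encode the labels
```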
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) 
###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
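For instance, here is a minimal sketch of `filter_by_func` on the MNIST sample; the criterion (dropping any file whose name starts with an underscore) is purely illustrative:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)

# items grabbed by `from_folder` are paths, so the function receives a `Path`
il = (ImageItemList.from_folder(path)
      .filter_by_func(lambda fname: not fname.name.startswith('_')))
len(il)
```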
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
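For example, reusing the planet data from the examples above, labelling from the csv with `label_delim=' '` produces lists of tags, so the labels come out as a [`MultiCategoryList`](/data_block.html#MultiCategoryList) without you having to say so (and you could still override that with `label_cls`); a quick sketch:

```python
from fastai.vision import *

planet = untar_data(URLs.PLANET_TINY)
ll = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
      .random_split_by_pct()
      .label_from_df(label_delim=' '))
type(ll.train.y)    # -> MultiCategoryList
```

The guess is made from the type of those labels.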
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
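To make this concrete, here is a small hypothetical sketch (not a class from the library) of a processor whose state is computed on the first dataset it processes (which is the training set) and then reused as-is on the validation set:

```python
from fastai.basics import *
import numpy as np

class FillMeanProcessor(PreProcessor):
    "Hypothetical processor: replace NaN items with the mean of the training set."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.mean = None
    def process_one(self, item):
        return self.mean if np.isnan(item) else item
    def process(self, ds):
        # the state is computed once, on the first dataset processed (the training set)
        if self.mean is None:
            self.mean = float(np.nanmean(np.array(ds.items, dtype=np.float64)))
        super().process(ds)
```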
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
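Concretely, after labelling, the `train` and `valid` attributes of your data object are [`LabelList`](/data_block.html#LabelList)s pairing an `x` and a `y` [`ItemList`](/data_block.html#ItemList), and indexing one returns an `(x, y)` tuple. A quick sketch with the MNIST sample used earlier in these docs:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()

x, y = ll.train[0]                      # an (Image, Category) pair
len(ll.train), len(ll.valid), y
```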
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. 
###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
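For instance, a couple of minimal sketches on the MNIST sample (the exact criteria are just illustrations): keep a random 10% of the items to prototype quickly, or keep only the items sitting under certain top-level folders:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)

# keep a random ~10% of the items
small = ImageItemList.from_folder(path).filter_by_rand(0.1)

# keep only the items whose top-level folder is 'train' or 'valid'
il = ImageItemList.from_folder(path).filter_by_folder(include=['train', 'valid'])
```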
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. ###Code show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). 
This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
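As an illustration with the MNIST sample used earlier in these docs: the [`CategoryProcessor`](/data_block.html#CategoryProcessor) used by default for category labels computes the classes on the training set, and the validation labels are encoded with that same mapping:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder()

# the classes were determined on the training set and shared with the validation set
ll.train.y.classes, ll.valid.y.classes
```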
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` are the one expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown **Important**! No labels will be collected if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageItemList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_testlearn.validate()```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai import * from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai.text import * from fastai.vision import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addresses with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. 
Examples of use Let's begin with our traditional MNIST example. ###Code path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we create an easy [`DataBunch`](/basic_data.htmlDataBunch) suitable for classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ###Output _____no_output_____ ###Markdown This is aimed at data that is in folders following an ImageNet style, with the `train` and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ###Code data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.train_ds[0], data.test_ds.classes ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. 
###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ###Code data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works, it can also be used for text or tabular data. With ouy sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labelling is done. Here we use the column 'label' of our csv. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and the labelling is done. 
###Code adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = '>=50k' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...) `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling. Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.htmlImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageItemList`](/vision.data.htmlImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`TextList`](/text.data.htmlTextList) for text data - [`TextFilesList`](/text.data.htmlTextFilesList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
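For instance, here is a minimal sketch (reusing the MNIST `path` from the first example; the `.png` criterion is purely illustrative) of how such a filter could be chained in before the split: ###Code
il = ImageItemList.from_folder(path)
# keep only the items whose filename ends in '.png'; the function receives each item (here a file path)
il = il.filter_by_func(lambda fname: fname.suffix == '.png')
# or keep a random half of the items instead
# il = il.filter_by_rand(0.5)
il
###Output _____no_output_____ ###Markdown Each of these filtering methods is documented below.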
###Code show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels).The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). 
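To make this concrete, here is a small sketch (reusing the `ll` training set created above; the direct constructor call is only for illustration) showing where the classes and the encoded labels live: ###Code
# the labels of the training set are stored in a CategoryList:
# `classes` holds the vocabulary, `items` the encoded indices into it
ll.y.classes, ll.y.items[:5]
# a CategoryList can also be built directly from indices and a list of classes
CategoryList(ll.y.items, classes=ll.y.classes)
###Output _____no_output_____ ###Markdown The multi-label counterpart is the following: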
###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing a list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of labels. ###Code show_doc(FloatList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing the floats in items for regression. Will add a `log` if this flag is `True`. Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set. This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This defaults to applying `process_one` to every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a single classification problem. ###Code show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) ###Output _____no_output_____ ###Markdown [`PreProcessor`](/data_block.htmlPreProcessor) that will convert labels to codes using `classes` (if passed) in a multi-classification problem. ###Code show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. Add a test set To add a test set, you can use one of the two following methods.
###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ###Output _____no_output_____ ###Markdown Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown The basic dataset in fastai. Inputs are in `x`, targets in `y`. Optionally apply `tfms` to `x` and also `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.load_empty) show_doc(LabelList.from_lists) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(ItemLists, doc_string=False, title_level=3) ###Output _____no_output_____ ###Markdown Data in `path` split between several streams of inputs, [`train`](/train.htmltrain), `valid` and maybe `test`. ###Code show_doc(ItemLists.label_from_lists) show_doc(LabelLists, title_level=3, doc_string=False) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(ItemList.get) show_doc(CategoryList.new) show_doc(LabelLists.get_processors) show_doc(LabelList.from_lists) show_doc(LabelList.set_item) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(LabelLists.process) show_doc(ItemLists.transform) show_doc(LabelList.process) show_doc(LabelList.transform) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(ItemList.get_label_cls) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(LabelList.transform_y) show_doc(CategoryList.analyze_pred) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. 
You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? 
-> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? 
-> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perform on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How is the output above generated? Behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__), which basically prints out `itemlist[0]` to `itemlist[4]`. ###Code itemlist[0] ###Output _____no_output_____ ###Markdown And `itemlist[0]` basically calls `itemlist.get(0)`, which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods. ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. In the example above, the _train_ folder contains one subfolder per class, here _3_ and _7_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown The dataframe has 2 columns. The first column is the path to the image and the second column contains the label id for that image. In case you have multi-labels (i.e. more than one label for a single image), you will have a space-separated string in the labels column (the separator is determined by the `label_delim` argument of `label_from_df`). `from_df` and `from_csv` can be used in a more general way. In case you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format. How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) !
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, [`ItemList.get_label_cls`](/data_block.htmlItemList.get_label_cls) basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output [`CategoryList`](/data_block.htmlCategoryList); they are of type float, then it will output [`FloatList`](/data_block.htmlFloatList); if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an [`ItemList`](/data_block.htmlItemList) and puts all the function outputs into a list, and then passes the list onto [`ItemList._label_from_list`](/data_block.htmlItemList._label_from_list). Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). [`CategoryList`](/data_block.htmlCategoryList) uses `labels` to create an [`ItemList`](/data_block.htmlItemList) for dealing with categorical labels. Behind the scenes, [`CategoryList`](/data_block.htmlCategoryList) is a subclass of [`CategoryListBase`](/data_block.htmlCategoryListBase) which is a subclass of [`ItemList`](/data_block.htmlItemList). [`CategoryList`](/data_block.htmlCategoryList) inherits from [`CategoryListBase`](/data_block.htmlCategoryListBase) the properties such as `classes` (default as `None`), `filter_missing_y` (default as `True`), and has its own unique property `loss_func` (default as `CrossEntropyFlat()`), and its own class attribute `_processor` (default as [`CategoryProcessor`](/data_block.htmlCategoryProcessor)). ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.y.items, ll.train.y.classes, ll.train.y[0] cl = CategoryList(ll.train.y.items, ll.train.y.classes); cl ###Output _____no_output_____ ###Markdown For the behavior of printing out [`CategoryList`](/data_block.htmlCategoryList) object or access an element using index, please see [`CategoryList.get`](/data_block.htmlCategoryList.get) below. Behind the scenes, [`CategoryList.get`](/data_block.htmlCategoryList.get) is used inexplicitly when printing out the [`CategoryList`](/data_block.htmlCategoryList) object or `cl[idx]`. According to the source of [`CategoryList.get`](/data_block.htmlCategoryList.get), each `item` is used to get its own `class`. When 'classes' is a list of strings, then elements of `items` are used as index of a list, therefore they must be integers in the range from 0 to `len(classes)-1`; if `classes` is a dictionary, then elements of `items` are used as keys, therefore they can be strings too. See examples below for details. ###Code from fastai.vision import * items = np.array([0, 1, 2, 1, 0]) cl = CategoryList(items, classes=['3', '7', '9']); cl items = np.array(['3', '7', '9', '7', '3']) classes = {'3':3, '7':7, '9':9} cl = CategoryList(items, classes); cl show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). 
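As a minimal sketch (reusing the planet sample from the top of this page, where each image carries several space-separated tags), labelling from the dataframe produces such a list: ###Code
planet = untar_data(URLs.PLANET_TINY)
ll = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
      .split_by_rand_pct()
      .label_from_df(label_delim=' '))  # several tags per image -> MultiCategoryList
# `classes` is the full tag vocabulary; each y holds the tags of one image
ll.train.y.classes[:5], ll.train.y[0]
###Output _____no_output_____ ###Markdown For regression targets, the corresponding container is [`FloatList`](/data_block.htmlFloatList):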
###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown `ds`: an object of [`ItemList`](/data_block.htmlItemList) Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(CategoryProcessor.process) ###Output _____no_output_____ ###Markdown `ds` is an object of [`CategoryList`](/data_block.htmlCategoryList). It basically generates a list of unique labels (assigned to `ds.classes`) and a dictionary mapping `classes` to indexes (assigned to `ds.c2i`). It is an internal function only called to apply processors to training, validation and testing datasets after the labeling step. ###Code show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). 
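For example, a minimal sketch on the MNIST data used throughout this page (assuming, as in the first example, an unlabelled `test` folder sitting next to `train` and `valid`): ###Code
path = untar_data(URLs.MNIST_TINY)
data = (ImageList.from_folder(path)
        .split_by_folder()
        .label_from_folder()
        .add_test_folder()   # grabs the items of path/'test', all given an empty label
        .transform(get_transforms(do_flip=False), size=32)
        .databunch())
data.test_ds
###Output _____no_output_____ ###Markdown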
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. Behind the scenes, it takes inputs [`ItemList`](/data_block.htmlItemList) and labels [`ItemList`](/data_block.htmlItemList) as its properties `x` and `y`, sets property `item` to `None`, and uses [`LabelList.transform`](/data_block.htmlLabelList.transform) to apply a list of transforms `TfmList` to `x` and `y` if `tfm_y` is set `True`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path_data).split_by_folder('train', 'valid').label_from_folder() ll.train.x, ll.train.y LabelList(x=ll.train.x, y=ll.train.y) show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) ###Output _____no_output_____ ###Markdown Behind the scenes, [`LabelList.process`](/data_block.htmlLabelList.process) does 3 three things: 1. ask labels `y` to be processed by `yp` with `y.process(yp)`; 2. if `y.filter_missing_y` is `True`, then removes the missing data samples from `x` and `y`; 3. 
ask inputs `x` to be processed by `xp` with `x.process(xp)` ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp sd.train.process(xp, yp) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. 
Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) ###Output _____no_output_____ ###Markdown Behind the scenes, `LabelLists.get_processors()` first puts `train.x._processor` classes and `train.y._processor` classes into separate lists, and then instantiates those processors and put them into `xp` and `yp`. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid') sd.train = sd.train.label_from_folder(from_item_lists=True) sd.valid = sd.valid.label_from_folder(from_item_lists=True) sd.__class__ = LabelLists xp,yp = sd.get_processors() xp,yp show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) show_doc(ItemList.process) ###Output _____no_output_____ ###Markdown `processor` is one or more `PreProcessors` objects Behind the scenes, we put all of `processor` into a list and apply them all to an object of [`ItemList`](/data_block.htmlItemList) or its subclasses. Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. 
###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. ###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) ###Output _____no_output_____ ###Markdown It basically converts `item` which is a category name to an index. `classes`: a list of unique and sorted labels; It creates the inner mapping from category name to index (stored in `c2i`) from the `classes`. ###Code show_doc(CategoryProcessor.create_classes) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. 
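Schematically, each of those questions maps to one call in a chain. The concrete methods differ by application, and full end-to-end examples follow below; this is just the generic shape of a pipeline, with `path` and `tfms` as placeholders for whatever your project uses:

```python
data = (ItemList.from_folder(path)   # where are the inputs?
        .split_by_rand_pct()         # how to split into train/valid?
        .label_from_folder()         # how to label?
        .add_test_folder()           # optionally, add a test set
        .transform(tfms)             # which transforms to apply?
        .databunch())                # wrap everything into a DataBunch
```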
The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. ###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). 
Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. 
With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
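For example, once the items load in a deterministic order, seeding NumPy before a random split selects the same validation files on every machine (a small sketch reusing `path_data` from the cells above; the seed value is arbitrary):

```python
np.random.seed(42)
sd = (ItemList.from_folder(path_data/'test', presort=True)
      .split_by_rand_pct(0.2))
[o.name for o in sd.valid.items[:3]]   # identical on every run and every machine
```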
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head

show_doc(ItemList.use_partial_data)

path = untar_data(URLs.MNIST_SAMPLE)
ImageList.from_folder(path).use_partial_data(0.5)

###Output
_____no_output_____
###Markdown

Contrast the number of items with the list created without the filter.

###Code

ImageList.from_folder(path)

###Output
_____no_output_____
###Markdown

Writing your own [`ItemList`](/data_block.htmlItemList) First check whether you can easily customize one of the existing subclasses by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variable

If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed.

###Code

show_doc(ItemList.analyze_pred)

show_doc(ItemList.get)

###Output
_____no_output_____
###Markdown

We will get a glimpse of how `get` works with the following demo.

###Code

path_data = untar_data(URLs.MNIST_TINY); path_data.ls()

il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test'])
il_data_base

###Output
_____no_output_____
###Markdown

`get` is used implicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because the default implementation simply returns the item.

###Code

il_data_base[15]

###Output
_____no_output_____
###Markdown

While creating your custom [`ItemList`](/data_block.htmlItemList), however, you can override this function to do something to your item (like opening an image).

###Code

il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test'])
il_data_image

###Output
_____no_output_____
###Markdown

Again, normally `get` is used implicitly within `il_data_image[15]`.

###Code

il_data_image[15]

###Output
_____no_output_____
###Markdown

The reason an image is printed out instead of a FilePath object is that [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and uses [`ImageList.open`](/vision.data.htmlImageList.open) to display the image.

###Code

show_doc(ItemList.new)

###Output
_____no_output_____
###Markdown

You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. We will get a feel for how `new` works with the following examples.

###Code

path_data = untar_data(URLs.MNIST_TINY); path_data.ls()

itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png'])
itemlist1

###Output
_____no_output_____
###Markdown

As you will see below, `copy_new` allows us to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and the arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__).

###Code

itemlist1.copy_new == ['x', 'label_cls', 'path']

((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None)
and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid')))

###Output
_____no_output_____
###Markdown

You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values.
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large [`ImageList`](/vision.data.htmlImageList) into two smaller [`ImageList`](/vision.data.htmlImageList)s, one for training set and the other for validation set. Both [`ImageList`](/vision.data.htmlImageList)s are attached to a large [`ItemLists`](/data_block.htmlItemLists) which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s, and finally attached to a [`ItemLists`](/data_block.htmlItemLists). ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. 
Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two [`ImageList`](/vision.data.htmlImageList)s, and then pass onto `split_by_list` to split `il` into two [`ImageList`](/vision.data.htmlImageList)s and attach to a [`ItemLists`](/data_block.htmlItemLists). ###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two [`ImageList`](/vision.data.htmlImageList)s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` ([`ItemLists`](/data_block.htmlItemLists)) to initialize an [`ItemLists`](/data_block.htmlItemLists) object, which basically takes in the training, valiation and testing (optionally) [`ImageList`](/vision.data.htmlImageList)s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown Behind the scenes, `ItemList.get_label_cls` basically select a label class according to the item type of `labels`, whereas `labels` can be any of `Collection`, `pandas.core.frame.DataFrame`, `pandas.core.series.Series`. If the list elements are of type string or integer, `get_label_cls` will output `CategoryList`; they are of type float, then it will output `FloatList`; if they are of type Collection, then it will output `MultiCateogryList`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid'); sd labels = ['7', '3'] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7, 3] label_cls = sd.train.get_label_cls(labels); label_cls labels = [7.0, 3.0] label_cls = sd.train.get_label_cls(labels); label_cls labels = [[7, 3],] label_cls = sd.train.get_label_cls(labels); label_cls labels = [['7', '3'],] label_cls = sd.train.get_label_cls(labels); label_cls ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") ###Output _____no_output_____ ###Markdown Behind the scenes, when an [`ItemList`](/data_block.htmlItemList) calls `label_from_folder`, it creates a lambda function which outputs a foldername which a file Path object immediately or directly belongs to, and then calls `label_from_func` with the lambda function as input. On the practical and high level, `label_from_folder` is mostly used with [`ItemLists`](/data_block.htmlItemLists) rather than [`ItemList`](/data_block.htmlItemList) for simplicity and efficiency, for details see the `label_from_folder` example on [ItemLists](). Even when you just want a training set [`ItemList`](/data_block.htmlItemList), you still need to do `split_none` to create an [`ItemLists`](/data_block.htmlItemLists) and then do labeling with `label_from_folder`, as the example shown below. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() sd_train = ImageList.from_folder(path_data/'train').split_none() ll_train = sd_train.label_from_folder(); ll_train show_doc(ItemList.label_from_func) ###Output _____no_output_____ ###Markdown Inside `label_from_func`, it applies the input `func` to every item of an `ItemList` and puts all the function outputs into a list, and then passes the list onto `ItemList._label_from_list`. Below is a simple example of using `label_from_func`. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) sd = ImageList.from_folder(path_data).split_by_folder('train', 'valid');sd func=lambda o: (o.parts if isinstance(o, Path) else o.split(os.path.sep))[-2] ###Output _____no_output_____ ###Markdown The lambda function above is to access the immediate foldername for a file Path object. ###Code ll = sd.label_from_func(func); ll show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. ###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. 
`tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. ###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an [`ItemLists`](/data_block.htmlItemLists) object, which basically brings in the training, valiation and testing (optionally) [`ItemList`](/data_block.htmlItemList)s as its properties. It also offers helpful warning messages on situations when the training or validation [`ItemList`](/data_block.htmlItemList) is empty. See the following example for how to create an [`ItemLists`](/data_block.htmlItemLists) object. 
###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') il_test = ImageList.from_folder(path_data/'test') ils = ItemLists(path=path_data, train=il_train, valid=il_valid); ils ils.test = il_test; ils ###Output _____no_output_____ ###Markdown However, we are most likely to see an [`ItemLists`](/data_block.htmlItemLists), right after a large [`ItemList`](/data_block.htmlItemList) is splitted and turned into an [`ItemLists`](/data_block.htmlItemLists) by methods like [`ItemList.split_by_folder`](/data_block.htmlItemList.split_by_folder). Then, we will add labels to all training and validation simply using `sd.label_from_folder()` (`sd` is an [`ItemLists`](/data_block.htmlItemLists), see example below). Now, some of you may be surprised because `label_from_folder` is a method of [`ItemList`](/data_block.htmlItemList) not [`ItemLists`](/data_block.htmlItemLists). Well, this is a magic of fastai data_block api.With the following example, we may understand a little better how to get labelling done by calling [`ItemLists.__getattr__`](/data_block.htmlItemLists.__getattr__) with [`ItemList.label_from_folder`](/data_block.htmlItemList.label_from_folder). ###Code il = ImageList.from_folder(path_data); il ###Output _____no_output_____ ###Markdown An [`ItemList`](/data_block.htmlItemList) or its subclass object must do a split to turn itself into an [`ItemLists`](/data_block.htmlItemLists) before doing labeling to become a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code sd = il.split_by_folder(train='train', valid='valid'); sd ll = sd.label_from_folder(); ll ###Output _____no_output_____ ###Markdown Even when there is just an [`ImageList`](/vision.data.htmlImageList) from a traning set folder with no split needed, we still must do `split_none()` in order to create an [`ItemLists`](/data_block.htmlItemLists), and only then we can do `ItemLists.label_from_folder()` nicely. ###Code il_train = ImageList.from_folder(path_data/'train') sd_train = il_train.split_none(); sd_train ll_valid_empty = sd_train.label_from_folder(); ll_valid_empty ###Output _____no_output_____ ###Markdown So practially, although `label_from_folder` is not an [`ItemLists`](/data_block.htmlItemLists) method, we can call `ItemLists.label_from_folder()` to label training, validation and test [`ItemList`](/data_block.htmlItemList)s once for all. Behind the scenes, `ItemLists.label_from_folder()` actually calls `ItemLists.__getattr__('label_from_folder')`, in which all training, validation even testing [`ItemList`](/data_block.htmlItemList) get to call `label_from_folder`, and then turns the [`ItemLists`](/data_block.htmlItemLists) into a [`LabelLists`](/data_block.htmlLabelLists) and calls [`LabelLists.process`](/data_block.htmlLabelLists.process) at last.You can directly use `LabelLists.__getattr__` to do labelling as below. ###Code ld_inner = sd.__getattr__('label_from_folder'); ld_inner() show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) ###Output _____no_output_____ ###Markdown Creating a [`LabelLists`](/data_block.htmlLabelLists) object is exactly the same way as creating an [`ItemLists`](/data_block.htmlItemLists) object, because its base class is [`ItemLists`](/data_block.htmlItemLists) and does not overwrite [`ItemLists.__init__`](/data_block.htmlItemLists.__init__). 
The example below shows how to build a [`LabelLists`](/data_block.htmlLabelLists) object. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_train = ImageList.from_folder(path_data/'train') il_valid = ImageList.from_folder(path_data/'valid') ll_test = LabelLists(path_data, il_train, il_valid); ll_test.test = il_valid = ImageList.from_folder(path_data/'test') ll_test show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. 
###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____ ###Markdown The data block API ###Code from fastai.gen_doc.nbdoc import * from fastai.basics import * np.random.seed(42) ###Output _____no_output_____ ###Markdown The data block API lets you customize the creation of a [`DataBunch`](/basic_data.htmlDataBunch) by isolating the underlying parts of that process in separate blocks, mainly: 1. Where are the inputs and how to create them? 1. How to split the data into a training and validation sets? 1. How to label the inputs? 1. What transforms to apply? 1. How to add a test set? 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.htmlDataBunch)? Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but it may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.htmlDataBunch) (batch size, collate function...)The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.htmlDataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.htmlDataBunch) are great for beginners but you can't always make your data fit in the tracks they require.As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts. Examples of use Let's begin with our traditional MNIST example. 
###Code from fastai.vision import * path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ###Output _____no_output_____ ###Markdown In [`vision.data`](/vision.data.htmlvision.data), we can create a [`DataBunch`](/basic_data.htmlDataBunch) suitable for image classification by simply typing: ###Code data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=64) ###Output _____no_output_____ ###Markdown This is a shortcut method which is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.htmltrain) and `valid` directories, each containing one subdirectory per class, where all the labelled pictures are. There is also a `test` directory containing unlabelled pictures. Here is the same code, but this time using the data block API, which can work with any style of a dataset. All the stages, which will be explained below, can be grouped together like this: ###Code data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch ###Output _____no_output_____ ###Markdown Now we can look at the created DataBunch: ###Code data.show_batch(3, figsize=(6,6), hide_axis=False) ###Output _____no_output_____ ###Markdown Let's look at another example from [`vision.data`](/vision.data.htmlvision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ###Code planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) pd.read_csv(planet/"labels.csv").head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms) ###Output _____no_output_____ ###Markdown With the data block API we can rewrite this like that: ###Code planet.ls() pd.read_csv(planet/"labels.csv").head() data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_df(label_delim=' ') #How to label? -> use the second column of the csv file and split the tags by ' ' .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ###Output _____no_output_____ ###Markdown The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.htmlImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.htmlDataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. 
###Code camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ###Output _____no_output_____ ###Markdown We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ###Code codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ###Output _____no_output_____ ###Markdown And we define the following function that infers the mask filename from the image filename. ###Code get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ###Output _____no_output_____ ###Markdown Then we can easily define a [`DataBunch`](/basic_data.htmlDataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. Side note: For further control over which transformations are used on the target, each transformation has a `use_on_y` parameter ###Code data = (SegmentationItemList.from_folder(path_img) #Where to find the data? -> in path_img and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_fn, classes=codes) #How to label? -> use the label function on the file name of the data .transform(get_transforms(), tfm_y=True, size=128) #Data augmentation? -> use tfms with a size of 128, also transform the label images .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(7,5)) ###Output _____no_output_____ ###Markdown Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ###Code coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ###Output _____no_output_____ ###Markdown The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ###Code data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco and its subfolders .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func on the file name of the data .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms; also transform the label images .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch, use a batch size of 16, # and we use bb_pad_collate to collate the data into a mini-batch data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ###Output _____no_output_____ ###Markdown But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. 
###Code from fastai.text import * imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList .from_csv(imdb, 'texts.csv', cols='text') #Where are the text? Column 'text' of texts.csv .split_by_rand_pct() #How to split it? Randomly with the default 20% in valid .label_for_lm() #Label it for a language model .databunch()) #Finally we convert to a DataBunch data_lm.show_batch() ###Output _____no_output_____ ###Markdown For a classification problem, we just have to change the way labeling is done. Here we use the csv column `label`. ###Code data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text') .split_from_df(col='is_valid') .label_from_df(cols='label') .databunch()) data_clas.show_batch() ###Output _____no_output_____ ###Markdown Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.htmlPreProcessor)s that are going to be applied to our data once the splitting and labelling is done. ###Code from fastai.tabular import * adult = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(adult/'adult.csv') dep_var = 'salary' cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain'] procs = [FillMissing, Categorify, Normalize] data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs) .split_by_idx(valid_idx=range(800,1000)) .label_from_df(cols=dep_var) .databunch()) data.show_batch() ###Output _____no_output_____ ###Markdown Step 1: Provide inputs The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.htmlItemList)). ###Code show_doc(ItemList, title_level=3) ###Output _____no_output_____ ###Markdown This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `label_cls` will be called to create the labels from the result of the label function, `inner_df` is an underlying dataframe, and `processor` is to be applied to the inputs after the splitting and labeling. It has multiple subclasses depending on the type of data you're handling. 
Here is a quick list: - [`CategoryList`](/data_block.htmlCategoryList) for labels in classification - [`MultiCategoryList`](/data_block.htmlMultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.htmlFloatList) for float labels in a regression problem - [`ImageList`](/vision.data.htmlImageList) for data that are images - [`SegmentationItemList`](/vision.data.htmlSegmentationItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) - [`SegmentationLabelList`](/vision.data.htmlSegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.htmlObjectItemList) like [`ImageList`](/vision.data.htmlImageList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.htmlPointsItemList) for points (of the type [`ImagePoints`](/vision.image.htmlImagePoints)) - [`ImageImageList`](/vision.data.htmlImageImageList) for image to image tasks - [`TextList`](/text.data.htmlTextList) for text data - [`TextList`](/text.data.htmlTextList) for text data stored in files - [`TabularList`](/tabular.data.htmlTabularList) for tabular data - [`CollabList`](/collab.htmlCollabList) for collaborative filtering We can get a little glimpse of how [`ItemList`](/data_block.htmlItemList)'s basic attributes and methods behave with the following code examples. ###Code from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY) il_data = ItemList.from_folder(path_data, extensions=['.csv']) il_data ###Output _____no_output_____ ###Markdown Here is how to access the path of [`ItemList`](/data_block.htmlItemList) and the actual `items` (here files) in the path. ###Code il_data.path il_data.items ###Output _____no_output_____ ###Markdown `len(il_data)` gives you the count of files inside `il_data` and you can access individual items using index. ###Code len(il_data) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) returns a single item with a single index, but returns an [`ItemList`](/data_block.htmlItemList) if given a list of indexes. ###Code il_data[1] il_data[:1] ###Output _____no_output_____ ###Markdown With `il_data.add` we can perform in_place concatenate another [`ItemList`](/data_block.htmlItemList) object. ###Code il_data.add(il_data); il_data from fastai.vision import * path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist = ItemList.from_folder(path_data/'test') itemlist ###Output _____no_output_____ ###Markdown As we can see, the files do not necesarily return in alpha-numeric order by default. In the above: 1503.png, ... 617.png, 585.png ...This is OK when you're always using the same machine, as the same dataset should return in the same order. But when building a datablock on one machine (say GCP) and then porting the same code to a different machine (say your laptop) that same dataset and code might return the files in a different order.Since all random operations use the loaded order of the dataset as the starting point, you will not be able to replicate any random operations, say randomly splitting the data into 80% train, and 20% validation, even while correctly seeding.The solution is to use `presort=True` in the `.from_folder()` method. As can be seen below, with that argument turned on, the file return in ascending order, and this behavior will match across machines and across platforms. 
Now you can reproduce any random operation you perfrom on the loaded data. ###Code itemlist = ItemList.from_folder(path_data/'test', presort=True) itemlist ###Output _____no_output_____ ###Markdown How does such output above is generated?behind the scenes, executing `itemlist` calls [`ItemList.__repr__`](/data_block.htmlItemList.__repr__) which basically prints out `itemlist[0]` to `itemlist[4]` ###Code itemlist[0] ###Output _____no_output_____ ###Markdown and `itemlist[0]` basically calls `itemlist.get(0)` which returns `itemlist.items[0]`. That's why we have outputs like above. Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ###Code show_doc(ItemList.from_folder) path = untar_data(URLs.MNIST_TINY) path.ls() ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown `path` is your root data folder. In the `path` directory you have _train_ and _valid_ folders which would contain your images. For the below example, _train_ folder contains two folders/classes _cat_ and _dog_. ###Code show_doc(ItemList.from_df) ###Output _____no_output_____ ###Markdown Dataframe has 2 columns. The first column is the path to the image and the second column contains label id for that image. In case you have multi-labels (i.e more than one label for a single image), you will have a space(as determined by `label_delim` argument of `label_from_df`) seperated string in the labels column.`from_df` and `from_csv` can be used in a more general way. In cases you are not able to figure out how to get your ImageList, it is very easy to make a csv file with the above format.How to set `path`? `path` refers to your root data directory. So the paths in your csv file should be relative to `path` and not absolute paths. In the below example, in _labels.csv_ the paths to the images are __path + train/3/7463.png__ ###Code path = untar_data(URLs.MNIST_SAMPLE) path.ls() df = pd.read_csv(path/'labels.csv') df.head() ImageList.from_df(df, path) show_doc(ItemList.from_csv) path = untar_data(URLs.MNIST_SAMPLE) path.ls() ImageList.from_csv(path, 'labels.csv') ###Output _____no_output_____ ###Markdown Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. ###Code show_doc(ItemList.filter_by_func) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown Suppose that you only want to keep images with a suffix ".png". Well, this method will do magic for you. ###Code Path(df.name[0]).suffix ImageList.from_df(df, path).filter_by_func(lambda fname: Path(fname).suffix == '.png') show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).filter_by_rand(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) show_doc(ItemList.to_text) path = untar_data(URLs.MNIST_SAMPLE) pd.read_csv(path/'labels.csv').head() file_name = "item_list.txt" ImageList.from_folder(path).to_text(file_name) ! 
cat {path/file_name} | head show_doc(ItemList.use_partial_data) path = untar_data(URLs.MNIST_SAMPLE) ImageList.from_folder(path).use_partial_data(0.5) ###Output _____no_output_____ ###Markdown Contrast the number of items with the list created without the filter. ###Code ImageList.from_folder(path) ###Output _____no_output_____ ###Markdown Writing your own [`ItemList`](/data_block.htmlItemList) First check if you can't easily customize one of the existing subclass by:- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)- applying a custom `processor` (see step 4)- changing the default `label_cls` for the label creation- adding a default [`PreProcessor`](/data_block.htmlPreProcessor) with the `_processor` class variableIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ###Code show_doc(ItemList.analyze_pred) show_doc(ItemList.get) ###Output _____no_output_____ ###Markdown We will have a glimpse of how `get` work with the following demo. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il_data_base = ItemList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_base ###Output _____no_output_____ ###Markdown `get` is used inexplicitly within `il_data_base[15]`. `il_data_base.get(15)` gives the same result here, because its defulat it's to return that. ###Code il_data_base[15] ###Output _____no_output_____ ###Markdown While creating your custom [`ItemList`](/data_block.htmlItemList) however, you can override this function to do some things to your item (like opening an image). ###Code il_data_image = ImageList.from_folder(path=path_data, extensions=['.png'], include=['test']) il_data_image ###Output _____no_output_____ ###Markdown Again, normally `get` is used inexplicitly within `il_data_image[15]`. ###Code il_data_image[15] ###Output _____no_output_____ ###Markdown The reason why an image is printed out instead of a FilePath object, is [`ImageList.get`](/vision.data.htmlImageList.get) overwrites [`ItemList.get`](/data_block.htmlItemList.get) and use [`ImageList.open`](/vision.data.htmlImageList.open) to print an image. ###Code show_doc(ItemList.new) ###Output _____no_output_____ ###Markdown You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that needs to be copied each time `new` is called in `__init__`. We will get a feel of how `new` works with the following examples. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() itemlist1 = ItemList.from_folder(path=path_data/'valid', extensions=['.png']) itemlist1 ###Output _____no_output_____ ###Markdown As you will see below, `copy_new` allows use to borrow any argument and its value from `itemlist1`, and `itemlist1.new(itemlist1.items)` allows us to use `items` and arguments inside `copy_new` to create another [`ItemList`](/data_block.htmlItemList) by calling [`ItemList.__init__`](/data_block.htmlItemList.__init__). ###Code itemlist1.copy_new == ['x', 'label_cls', 'path'] ((itemlist1.x == itemlist1.label_cls == itemlist1.inner_df == None) and (itemlist1.path == Path('/Users/Natsume/.fastai/data/mnist_tiny/valid'))) ###Output _____no_output_____ ###Markdown You can select any argument from [`ItemList.__init__`](/data_block.htmlItemList.__init__)'s signature and change their values. 
###Code itemlist1.copy_new = ['x', 'label_cls', 'path', 'inner_df'] itemlist1.x = itemlist1.label_cls = itemlist1.path = itemlist1.inner_df = 'test' itemlist2 = itemlist1.new(items=itemlist1.items) (itemlist2.inner_df == itemlist2.x == itemlist2.label_cls == 'test' and itemlist2.path == Path('test')) show_doc(ItemList.reconstruct) ###Output _____no_output_____ ###Markdown Step 2: Split the data between the training and the validation set This step is normally straightforward, you just have to pick oe of the following functions depending on what you need. ###Code show_doc(ItemList.split_none) show_doc(ItemList.split_by_rand_pct) show_doc(ItemList.split_subsets) ###Output _____no_output_____ ###Markdown This function is handy if you want to work with subsets of specific sizes, e.g., you want to use 20% of the data for the validation dataset, but you only want to train on a small subset of the rest of the data: `split_subsets(train_size=0.08, valid_size=0.2)`. ###Code show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) ###Output _____no_output_____ ###Markdown Internally makes a call to `split_by_files`. `fname` contains your image file names like 0001.png. ###Code show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") ###Output _____no_output_____ ###Markdown Basically, `split_by_folder` takes in two folder names ('train' and 'valid' in the following example), to split `il` the large `ImageList` into two smaller `ImageList`s, one for training set and the other for validation set. Both `ImageList`s are attached to a large `ItemLists` which is the final output of `split_by_folder`. ###Code path_data = untar_data(URLs.MNIST_TINY); path_data.ls() il = ItemList.from_folder(path=path_data); il sd = il.split_by_folder(train='train', valid='valid'); sd ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_folder` uses `_get_by_folder(name)`, to turn both 'train' and 'valid' folders into two list of indexes, and pass them onto `split_by_idxs` to split `il` into two `ImageList`s, and finally attached to a `ItemLists`. ###Code train_idx = il._get_by_folder(name='train') train_idx[:5], train_idx[-5:], len(train_idx) valid_idx = il._get_by_folder(name='valid') valid_idx[:5], valid_idx[-5:],len(valid_idx) ###Output _____no_output_____ ###Markdown By the way, `_get_by_folder(name)` works in the following way, first, index the entire `il.items`, loop every item and if an item belongs to the named folder, e.g., 'train', then put it into a list. The folder `name` is the only input, and output is the list. ###Code show_doc(ItemList.split_by_idx) path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') df.head() ###Output _____no_output_____ ###Markdown You can pass a list of indices that you want to put in the validation set like [1, 3, 10]. Or you can pass a contiguous list like `list(range(1000))` ###Code data = (ImageList.from_df(df, path) .split_by_idx(list(range(1000)))) data show_doc(ItemList.split_by_idxs) ###Output _____no_output_____ ###Markdown Behind the scenes, `split_by_idxs` turns two index lists (`train_idx` and `valid_idx`) into two `ImageList`s, and then pass onto `split_by_list` to split `il` into two `ImageList`s and attach to a `ItemLists`. 
###Code sd = il.split_by_idxs(train_idx=train_idx, valid_idx=valid_idx); sd show_doc(ItemList.split_by_list) ###Output _____no_output_____ ###Markdown `split_by_list` takes in two `ImageList`s which in the case below are `il[train_idx]` and `il[valid_idx]`, and pass them onto `_split` (`ItemLists`) to initialize an `ItemLists` object, which basically takes in the training, valiation and testing (optionally) `ImageList`s as its properties. ###Code sd = il.split_by_list(train=il[train_idx], valid=il[valid_idx]); sd ###Output _____no_output_____ ###Markdown This is more of an internal method, you should be using `split_by_files` if you want to pass a list of filenames for the validation set. ###Code show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) ###Output _____no_output_____ ###Markdown To use this function, you need a boolean column `is_valid`. If `is_valid[index] = True`, then that example is put in the validation set and if `is_valid[index] = False` the example is put in the training set. ###Code path = untar_data(URLs.MNIST_SAMPLE) df = pd.read_csv(path/'labels.csv') # Create a new column for is_valid df['is_valid'] = [True]*(df.shape[0]//2) + [False]*(df.shape[0]//2) # Randomly shuffle dataframe df = df.reindex(np.random.permutation(df.index)) print(df.shape) df.head() data = (ImageList.from_df(df, path) .split_from_df()) data jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ###Output _____no_output_____ ###Markdown Step 3: Label the inputs To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.htmlItemList), and if there is none, it will go to [`CategoryList`](/data_block.htmlCategoryList), [`MultiCategoryList`](/data_block.htmlMultiCategoryList) or [`FloatList`](/data_block.htmlFloatList) depending on the type of the labels). This is implemented in the following function: ###Code show_doc(ItemList.get_label_cls) ###Output _____no_output_____ ###Markdown If no `label_cls` argument is passed, the correct labeling type can usually be inferred based on the data (for classification or regression). If you have multiple regression targets (e.g. predict 5 different numbers from a single image/text), be aware that arrays of floats are by default considered to be targets for one-hot encoded classification. If your task is regression, be sure the pass `label_cls = FloatList` so that learners created from your databunch initialize correctly. 
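As a quick added illustration of the `label_cls` point above (a sketch for clarity, not an official example), you can force regression-style labels explicitly. Here the 0/1 digit label from the MNIST_SAMPLE csv used earlier is treated as a float target purely to show the mechanics:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
df = pd.read_csv(path/'labels.csv')

# Without label_cls the integer labels would be inferred as categories;
# passing label_cls=FloatList turns them into a regression target, so a
# learner built from this databunch initializes for regression.
data = (ImageList.from_df(df, path)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='label', label_cls=FloatList)
        .databunch(bs=16))
```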
The first example in these docs created labels as follows: ###Code path = untar_data(URLs.MNIST_TINY) ll = ImageList.from_folder(path).split_by_folder().label_from_folder().train ###Output _____no_output_____ ###Markdown If you want to save the data necessary to recreate your [`LabelList`](/data_block.htmlLabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:```pythonll.train.to_csv('tmp.csv')```Or just grab a `pd.DataFrame` directly: ###Code ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ###Output _____no_output_____ ###Markdown [`ItemList`](/data_block.htmlItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.htmlCategoryProcessor). ###Code show_doc(MultiCategoryList, title_level=3) ###Output _____no_output_____ ###Markdown It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `sep` is used to split the content of `items` in a list of tags.If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ###Code show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ###Output _____no_output_____ ###Markdown Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.htmlItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.htmlPreProcessor) classes).A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.htmlPreProcessor) and applied on the validation set.This is the generic class for all processors. ###Code show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ###Output _____no_output_____ ###Markdown Process one `item`. This method needs to be written in any subclass. ###Code show_doc(PreProcessor.process) ###Output _____no_output_____ ###Markdown Process a dataset. This default to apply `process_one` on every `item` of `ds`. 
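To make the processor idea concrete, here is a minimal, illustrative [`PreProcessor`](/data_block.htmlPreProcessor) (an added sketch, not part of the library): it computes the median of the items the first time `process` is called (i.e. on the training set) and reuses that value to fill missing entries in the other sets, mirroring the fill-with-median behaviour described above.

```python
from fastai.data_block import PreProcessor
import numpy as np

class MedianFillProcessor(PreProcessor):
    "Fill NaN items with the median computed on the first (training) dataset."
    def __init__(self, ds=None):
        super().__init__(ds)
        self.median = None
    def process(self, ds):
        if self.median is None:   # state is computed once, on the training set
            self.median = float(np.nanmedian(np.array(ds.items, dtype=float)))
        super().process(ds)       # applies process_one to every item of ds
    def process_one(self, item):
        return self.median if np.isnan(item) else item
```

Such a processor could then be handed to an [`ItemList`](/data_block.htmlItemList) through its `processor` argument, or set as the `_processor` class variable mentioned earlier.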
###Code show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ###Output _____no_output_____ ###Markdown Optional steps Add transforms Transforms differ from processors in the sense they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ###Code show_doc(LabelLists.transform) ###Output _____no_output_____ ###Markdown This is primary for the vision application. The `kwargs` arguments are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target.For examples see: [vision.transforms](vision.transform.html). Add a test set To add a test set, you can use one of the two following methods. ###Code show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) jekyll_warn("In fastai the test set is unlabeled! No labels will be collected even if they are available.") ###Output _____no_output_____ ###Markdown Instead, either the passed `label` argument or an empty label will be used for all entries of this dataset (this is required by the internal pipeline of fastai). In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to validate your model on a `test` dataset with labels, you probably need to use it as a validation set, as in:```data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...)```Another approach, where you do use a normal validation set, and then when the training is over, you just want to validate the test set w/ labels as a validation set, you can do this:```tfms = []path = Path('data').resolve()data = (ImageList.from_folder(path) .split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = cnn_learner(data, models.resnet50, metrics=accuracy)learn.fit_one_cycle(5,1e-2) now replace the validation dataset entry with the test dataset as a new validation dataset: everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` (or perhaps you were already using the latter, so simply switch to valid='test')data_test = (ImageList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.validate(data_test.valid_dl)```Of course, your data block can be totally different, this is just an example. Step 4: convert to a [`DataBunch`](/basic_data.htmlDataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.htmlDataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.htmlDataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ###Code show_doc(LabelLists.databunch) ###Output _____no_output_____ ###Markdown Inner classes ###Code show_doc(LabelList, title_level=3) ###Output _____no_output_____ ###Markdown Optionally apply `tfms` to `y` if `tfm_y` is `True`. 
###Code show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) ###Output _____no_output_____ ###Markdown It initializes an `ItemLists` object, which basically brings in the training, valiation and testing (optionally) `ItemList`s as its properties. It also offers helpful warning messages on situations when the training set `ItemList` or the validation one is empty. ###Code show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ###Output _____no_output_____ ###Markdown Helper functions ###Code show_doc(get_files) ###Output _____no_output_____ ###Markdown To to more precise, this function returns list of FilePath objects using files in `path` that must have a suffix in `extensions`, and hidden folders and files are ignored. If `recurse=True`, all files in subfolders will be applied; `include` is used to select particular folders to apply.Inside [`get_files`](/data_block.htmlget_files), there is [`_get_files`](/data_block.html_get_files) which turns all filenames inside `f` from directory `parent/p` into a list of FilePath objects. All filenames must have a suffix in `extensions`. All hidden files are ignored. ###Code path_data = untar_data(URLs.MNIST_TINY) path_data.ls() ###Output _____no_output_____ ###Markdown With `recurse=False`, no subfolder files are made available. ###Code list_FilePath_noRecurse = get_files(path_data) list_FilePath_noRecurse ###Output _____no_output_____ ###Markdown With `recurse=True`, all subfolder files are made available, except hidden files. ###Code list_FilePath_recurse = get_files(path_data, recurse=True) list_FilePath_recurse[:3] list_FilePath_recurse[-2:] ###Output _____no_output_____ ###Markdown With `extensions=['.csv']`, only files with the suffix of `.csv` are made available. ###Code list_FilePath_recurse_csv = get_files(path_data, recurse=True, extensions=['.csv']) list_FilePath_recurse_csv ###Output _____no_output_____ ###Markdown With `include=['test']`, only files in `path_data` and its subfolder `test` are made available. 
###Code list_FilePath_include = get_files(path_data, recurse=True, extensions=['.png','.jpg','.jpeg'], include=['test']) list_FilePath_include[:3] list_FilePath_include[-3:] ###Output _____no_output_____ ###Markdown Undocumented Methods - Methods moved below this line will intentionally be hidden ###Code show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ###Output _____no_output_____ ###Markdown New Methods - Please document or move to the undocumented section ###Code show_doc(ItemList.add) ###Output _____no_output_____
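To close the loop on the "writing your own [`ItemList`](/data_block.htmlItemList)" advice given earlier, here is a small added sketch (not an official recipe) of the simplest kind of customization: subclass an existing list and override `open`; the rest of the data block pipeline is unchanged.

```python
from fastai.vision import *

class GrayImageList(ImageList):
    "Open every image as a single-channel ('L') image instead of RGB."
    def open(self, fn):
        return open_image(fn, convert_mode='L')

path = untar_data(URLs.MNIST_TINY)
data = (GrayImageList.from_folder(path)   # the custom class drops straight into the pipeline
        .split_by_folder()
        .label_from_folder()
        .databunch(bs=16))
```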
posts/am105/sympy_dsolve.ipynb
###Markdown Use `dsolve` in `sympy` to solve differential equations ###Code import sympy as sp from sympy.solvers.ode import dsolve t = sp.var('t') f = sp.Function('f') f_ = sp.Derivative(f(t), t) f_ f__ = sp.Derivative(f(t),t,t) f__ ode = f__ - 4*f_ + 4 ode dsolve(ode,f(t)) dsolve(ode,f(t), ics={f(0):sp.E, f(4):sp.E}) ###Output _____no_output_____
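A small follow-up sketch (added here, not in the original post): the general solution can be verified by substituting it back into the ODE with `checkodesol`, which returns `(True, 0)` when the solution satisfies the equation.

```python
import sympy as sp
from sympy.solvers.ode import dsolve, checkodesol

t = sp.var('t')
f = sp.Function('f')
ode = sp.Derivative(f(t), t, t) - 4*sp.Derivative(f(t), t) + 4

sol = dsolve(ode, f(t))          # general solution with constants C1, C2
print(sol)
print(checkodesol(ode, sol))     # (True, 0) means the solution checks out
```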
notebooks/Hello World.ipynb
###Markdown This is a test notebook ![](https://octodex.github.com/images/daftpunktocat-thomas.gif) ###Code print('Hello World') ###Output Hello World
examples/sample_annn_pkg_001.ipynb
###Markdown Load modules load sample module ###Code import sample_annn_pkg as sap ###Output _____no_output_____ ###Markdown check version ###Code sap.__version__ ###Output _____no_output_____ ###Markdown Samples sample001 ###Code sap.func02() ###Output success!! poyo ###Markdown sample002 ###Code # load csv df0 = sap.datasets.load_sample_data0() df0 ###Output load sample data0 file format: csv sample pandas.DataFrame: col1 col2 col3 0 1 2 3 1 4 5 6 2 7 8 9 ###Markdown sample003 ###Code # load excel df1 = sap.datasets.load_sample_data1() df1 ###Output load sample data1 file format: excel sample pandas.DataFrame: col4 col5 col6 0 hoge 10 11 1 fuga 12 13 2 poyo 14 15 3 piyo 16 17 ###Markdown sample004 ###Code try: sap.func01 except Exception as e: print(e) ###Output module 'sample_annn_pkg' has no attribute 'func01' ###Markdown sample005 ###Code pc = sap.PoyoClass() print(pc.get_hoge()) print('*' * 20) pc.set_hoge(123) print(pc.get_hoge()) ###Output hoge num: 100 100 ******************** hoge num: 123 123
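The behaviour in sample005 suggests a simple getter/setter pattern. The following is only a hypothetical reconstruction (the real `sample_annn_pkg` source is not shown in this notebook) of a class that would produce the same output:

```python
class PoyoClass:
    """Hypothetical stand-in for sample_annn_pkg.PoyoClass, inferred from the output above."""
    def __init__(self):
        self.hoge = 100                      # default seen in the first get_hoge() call

    def get_hoge(self):
        print('hoge num: %d' % self.hoge)    # matches the printed "hoge num: ..." line
        return self.hoge

    def set_hoge(self, num):
        self.hoge = num
```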
Assignment/.ipynb_checkpoints/All_Assignment-checkpoint.ipynb
###Markdown 1 ImplementationYou should write an environment that implements the game Easy21. Specifically, write a function, named step, which takes as input a state *s* (dealer\`s first card 1-10 and the player\`s sum 1-21), and an action *a* (hit or stick), and returns a sample of the next state *s*' (which may be terminal if the game is finished) and reward *r*. We will be using this environment for model-free reinforcement learning, and you should not explicitly represent the transition matrix for the MDP. There is no discouting (gamma = 1). You should treat the dealer\`s moves as part of the environment, i.e. calling *step* with a *stick* action will play out the dealer\`s cards and return the final reward and terminal state. __Environment__- Step Function * state s (dealerValue and playerValue) * action a (hit or stick) * return (next state s' and reward *r* and terminal state)* no discounting factor (gamma = 1) ###Code class EasyEnv(object): def __init__(self): self.lowerbound = 1 self.upperbound = 21 # 1 is hit and 0 is stick self.actions = [0, 1] def initGame(self): self.playerValue = np.random.randint(1, 11) self.dealerValue = np.random.randint(1, 11) def draw(self): card_value = np.random.randint(1, 11) if round(np.random.rand(), 2) <= 0.3: return -card_value else: return card_value def get_state(self): return self.playerValue, self.dealerValue def step(self, playerValue, dealerValue, action): # Hit if action == 1: playerValue += self.draw() if playerValue > self.upperbound or playerValue < self.lowerbound: reward = -1 terminated = True else: reward = 0 terminated = False else: # Player Action is Stick. Dealer`s turn. while dealerValue < 17: dealerValue += self.draw() if dealerValue > self.upperbound or dealerValue < self.lowerbound or playerValue > dealerValue: reward = 1 elif playerValue == dealerValue: reward = 0 else: reward = -1 terminated = True return playerValue, dealerValue, reward, terminated ###Output _____no_output_____ ###Markdown 2 MonteCarloApply Monte-Carlo control to Easy21. Initialise the value function to zero.Use a time-varying scalar step-size of alpha_t = 1/N(s_t, a_t) and an epsilon-greedy exploration strategy with epsilon_t = N_0 / (N_0 + N(s_t)), where N_0 = 100 is a constant, N(s) is the number of times that state s has been visited, and N(s, a) is the number of times that action a has been selected from state s. Feel free to choose an alternative value for N_0, if it helps producing better results.Plot the optimal value function V\*(s) = max_aQ\*(s, a)using similar axes to the following figure taken from Sutton and Barto\`s Blackjack example __Value Function__- Initialise : zero__step-size__- alpha_t = 1/N(s_t, a_t)__epsilon-greedy exploration__- epsilon_t = N_0/(N_0 + N(s_t)), N_0 = 100 is a constantN(s): number of times that state __s__ has been visited. N(s, a): number of times that action __a__ has been selected from state s.__Optimal value function__ V\*(s) = max_aQ\*(s, a) Plot the optimal value function ###Code import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D N_0 = 100 actions = [0, 1] # 0 is Stick, 1 is Hit. N_s_dict = {} # Number of times that state s has been visited. N_sa_dict = {} # Number of times that action a has been selected from state s. 
Q_sa_dict = {} # Action-State Value V_s_dict = {} # State Value def calc_epsilon(N_s): if N_s not in N_s_dict.keys(): N_s_dict[N_s] = 0 epsilon_t = N_0/(N_0 + N_s_dict[N_s]) return epsilon_t def calc_alpha(N_sa): alpha = 1 / N_sa return alpha def epsilonGreedy(pValue, dValue): epsilon = calc_epsilon((pValue, dValue)) if (pValue, dValue, 0) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 0)] = 0 if (pValue, dValue, 1) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 1)] = 0 max_action = np.argmax([Q_sa_dict[pValue, dValue, act] for act in actions]) #RandomPolicy if epsilon is 1: return np.random.choice(actions) # Exploitation if round(np.random.rand(), 2) > epsilon: return max_action # Explore else: if max_action: return 0 else: return 1 env = EasyEnv() episodes = 1000000 for episode in range(episodes): terminated = False H = [] env.initGame() pValue, dValue = env.get_state() reward = 0 while not terminated: if (pValue, dValue) not in N_s_dict.keys(): N_s_dict[(pValue, dValue)] = 0 N_s_dict[(pValue, dValue)] += 1 action = epsilonGreedy(pValue, dValue) if (pValue, dValue, action) not in N_sa_dict.keys(): N_sa_dict[(pValue, dValue, action)] = 1 else: N_sa_dict[(pValue, dValue, action)] += 1 pPrime, dPrime, reward, terminated = env.step(pValue, dValue, action) H.append([pValue, dValue, action, reward]) pValue, dValue = pPrime, dPrime G = reward for (pValue, dValue, action, _) in H: alpha = calc_alpha(N_sa_dict[(pValue, dValue, action)]) if (pValue, dValue, action) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, action)] = 0 Q_sa_dict[(pValue, dValue, action)] += alpha*(G - Q_sa_dict[(pValue, dValue, action)]) Q_sa_dict for i in range(1, 22): for j in range(1, 11): if Q_sa_dict[(i, j, 0)] > Q_sa_dict[(i, j, 1)]: V_s_dict[(i, j)] = Q_sa_dict[(i, j, 0)] else: V_s_dict[(i, j)] = Q_sa_dict[(i, j, 1)] V_s_dict z = [] for k, v in V_s_dict.items(): z.append(v) z = np.array(z) z = z.reshape(21, 10) x = np.arange(1, 11) y = np.arange(1, 22) xs, ys = np.meshgrid(x, y) z = np.array(z) fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_surface(xs, ys, z, rstride=1, cstride=1, cmap='viridis') plt.show() ###Output _____no_output_____ ###Markdown 3 TD LearningImplement Sarsa(Lamda) in 21s. Initialise the value function to zero. Use the same step-szie and exploration schedules as in the previous section. Run the algorithm with parameter values Lamda = {0, 0.1, 0.2, ..., 1}. Stop each run after 1000 episodes and report the mean-squared error Sigma_s,a(Q(s, a) - Q*(s, a))^square over all states s and actions a, comparing the true values Q*(s, a) computed in the previous section with the estimated values Q(s, a) computed by Sarsa. Plot the mean-squared error against Lamda. For Lamda = 0 and Lamda = 1 only, plot the learning curve of mean-squared error against episode number. __Value Function__- Initialise : zero__step-size__- alpha_t = 1/N(s_t, a_t)__epsilon-greedy exploration__- epsilon_t = N_0/(N_0 + N(s_t)), N_0 = 100 is a constantN(s): number of times that state __s__ has been visited. N(s, a): number of times that action __a__ has been selected from state s.__Lamda__- {0, 0.1, 0.2, ..., 1}__Rule__- Stop each run after 1000 episodes and report the mean-squared error__MSE__- Sigma_s,a(Q(s,a)-Q*(s,a))^square over all states s and actions a- comparing the true values Q*(s,a) computed in the previous section with estimated values Q(s,a) computed by Sarsa.__Optimal value function__ - V\*(s) = max_aQ\*(s, a) - computed bt Plot the mean-squared error against Lamda. 
For Lamda = 0 and Lamda = 1 only, plot the learning curve of mean-squared error against episode number. ###Code T_Q = Q_sa_dict T_Q def calc_epsilon(N_s): if N_s not in N_s_dict.keys(): N_s_dict[N_s] = 0 epsilon_t = N_0/(N_0 + N_s_dict[N_s]) return epsilon_t def calc_alpha(N_sa): alpha = 1 / N_sa return alpha def epsilonGreedy(pValue, dValue): epsilon = calc_epsilon((pValue, dValue)) if (pValue, dValue, 0) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 0)] = 0 if (pValue, dValue, 1) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 1)] = 0 max_action = np.argmax([Q_sa_dict[pValue, dValue, act] for act in actions]) #RandomPolicy if epsilon is 1: return np.random.choice(actions) # Exploitation if round(np.random.rand(), 2) > epsilon: return max_action # Explore else: if max_action: return 0 else: return 1 env = EasyEnv() episodes = 10000 lmds = np.arange(0,11)/10 mselamdas = np.zeros((len(lmds), episodes)) finalMSE = np.zeros(len(lmds)) N_0 = 100 actions = [0, 1] # 0 is Stick, 1 is Hit. for lamC, lmd in enumerate(lmds): N_s_dict = {} # Number of times that state s has been visited. N_sa_dict = {} # Number of times that action a has been selected from state s. Q_sa_dict = {} # Action-State Value V_s_dict = {} # State Value for i in range(1, 22): for j in range(1, 11): Q_sa_dict[(i, j, 0)] = 0 Q_sa_dict[(i, j, 1)] = 0 wins = 0 for episode in range(episodes): terminated = False E = {} env.initGame() pValue, dValue = env.get_state() action = epsilonGreedy(pValue, dValue) SA = [] reward = 0 while not terminated: pPrime, dPrime, reward, terminated = env.step(pValue, dValue, action) if not terminated: aPrime = epsilonGreedy(pPrime, dPrime) tdError = reward + Q_sa_dict[(pPrime, dPrime, aPrime)] - Q_sa_dict[pValue, dValue, action] else: tdError = reward - Q_sa_dict[(pValue, dValue, action)] if (pValue, dValue, action) not in E.keys(): E[(pValue, dValue, action)] = 0 E[(pValue, dValue, action)] += 1 if (pValue, dValue, action) not in N_sa_dict.keys(): N_sa_dict[(pValue, dValue, action)] = 1 else: N_sa_dict[(pValue, dValue, action)] += 1 if (pValue, dValue) not in N_s_dict.keys(): N_s_dict[(pValue, dValue)] = 0 N_s_dict[(pValue, dValue)] += 1 SA.append([pValue, dValue, action]) alpha = calc_alpha(N_sa_dict[(pValue, dValue, action)]) for (_p, _d, _a) in SA: Q_sa_dict[(_p, _d, _a)] += alpha*tdError*E[_p, _d, _a] E[_p, _d, _a] *= lmd if not terminated: pValue, dValue, action = pPrime, dPrime, aPrime if reward == 1: wins += 1 mse = np.sum(np.square(np.array(list(Q_sa_dict.values())) - np.array(list(T_Q.values())))) / (21*10*2) mselamdas[lamC, episode] = mse if (episode + 1) % 1000 == 0: print("Lamda=%.1f Episode %06d, MSE %5.3f, Wins %.3f"%(lmd, episode+1, mse, wins/(episode+1))) finalMSE[lamC] = mse mselamdas mselamdas[10] x = np.arange(0,10000) fig = plt.figure() ax1 = fig.add_subplot(2, 1, 1) ax2 = fig.add_subplot(2, 1, 2) fig.subplots_adjust(hspace=1, wspace=0.4) ax1.plot(x, mselamdas[0]) ax1.set_xlabel('Episode') ax1.set_ylabel('MSE') ax1.set_title('Lamda 0') ax2.plot(x, mselamdas[10]) ax2.set_xlabel('Episode') ax2.set_ylabel('MSE') ax2.set_title('Lamda 1') plt.show() ###Output _____no_output_____ ###Markdown 4 Linear Function ApproximationWe now Consider a simple value function approximator using coarse coding. use a binary vector pi(s, a) with 3\*6\*2 = 36 features. Each binary feature has a value of 1 iff (s,a) lies within the cuboid of state-space corresponding to that feature, and the action corresponding to that feature. 
The cuboids have the following overlapping intervals: dealer(s) = {[1,4],[4,7],[7,10]} player(s) = {[1,6],[4,9],[7,12],[10,15],[13,18],[16,21]} a = {hit, stick} where * dealer(s) is the value of the dealer's first card (1-10)* sum(s) is the sum of the player's cards (1-21)Repeat the Sarsa(Lamda) experiment from the previous section, but using linear value function approximation Q(s,a) = pi(s,a)^Ttheta. User a constant exploration of epsilon = 0.05 and a constant step-size of 0.01. Plot the mean-square error against Lamda. For Lamda = 0 and Lamda = 1 only, plot the learning curve of mean-squared error against episode number. __Value Function__ - Using linear value function approximation Q(s, a) = pi(s, a)^T\*theta.__Epsilon Greedy__ - Use a constant exploration of epsilon = 0.05__Ste-size__ - constant step-size of 0.01__Plot__ - Plot the mean-squared error against Lamda. - For Lamda = 0 and Lamda = 1 only, plot the learning curve of mean-squared error against episode number. ###Code def epsilonGreedy(pValue, dValue): epsilon = 0.05 if (pValue, dValue, 0) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 0)] = 0 if (pValue, dValue, 1) not in Q_sa_dict.keys(): Q_sa_dict[(pValue, dValue, 1)] = 0 max_action = np.argmax([Q_sa_dict[pValue, dValue, act] for act in actions]) # Exploitation if round(np.random.rand(), 2) > epsilon: return max_action # Explore else: if max_action: return 0 else: return 1 def features(pValue, dValue, action): feature = np.zeros(3*6*2) for fi, (lower, upper) in enumerate(zip(range(1,8,3), range(4, 11, 3))): feature[fi] = (lower <= dValue <= upper) for fi, (lower, upper) in enumerate(zip(range(1,17,3), range(6, 22, 3)), start=3): feature[fi] = (lower <= pValue <= upper) feature[-2] = 1 if action == 0 else 0 feature[-1] = 1 if action == 1 else 0 return feature.reshape(1, -1) def Q(pValue, dValue, action): return np.dot(features(pValue, dValue, action), theta) allFeatures = np.zeros((22, 11, 2, 3*6*2)) for p in range(1, 22): for d in range(1, 11): allFeatures[p-1, d-1, 0] = features(p, d, 0) allFeatures[p-1, d-1, 1] = features(p, d, 1) def allQ(): return np.dot(allFeatures.reshape(-1, 3*6*2), theta).reshape(-1) env = EasyEnv() episodes = 1000 lmds = np.arange(0,11)/10 mselamdas = np.zeros((len(lmds), episodes)) finalMSE = np.zeros(len(lmds)) T_Q = Q_sa_dict N_0 = 100 actions = [0, 1] # 0 is Stick, 1 is Hit. alpha = 0.01 for lamC, lmd in enumerate(lmds): theta = np.random.randn(3*6*2, 1) wins = 0 for episode in range(episodes): terminated = False E = np.zeros_like(theta) env.initGame() pValue, dValue = env.get_state() action = epsilonGreedy(pValue, dValue) reward = 0 while not terminated: pPrime, dPrime, reward, terminated = env.step(pValue, dValue, action) if not terminated: aPrime = epsilonGreedy(pPrime, dPrime) tdError = reward + Q(pPrime, dPrime, aPrime) - Q(pValue, dValue, action) else: tdError = reward - Q(pValue, dValue, action) E = lmd*E + features(pValue, dValue, action).reshape(-1, 1) gradient = alpha*tdError*E theta = theta + gradient if not terminated: pValue, dValue, action = pPrime, dPrime, aPrime if reward == 1: wins += 1 mse = np.sum(np.square(allQ() - np.array(list(T_Q.values())))) / (21*10*2) mselamdas[lamC, episode] = mse if (episode + 1) % 1000 == 0: print("Lamda=%.1f Episode %06d, MSE %5.3f, Wins %.3f"%(lmd, episode+1, mse, wins/(episode+1))) finalMSE[lamC] = mse np.array() allFeatures = np.zeros((22,11,2, 3*6*2)) ###Output _____no_output_____
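The assignment also asks for a plot of the mean-squared error against Lamda; a possible final cell for that (assuming the `lmds` and `finalMSE` arrays computed in the cells above) could look like this:

```python
import matplotlib.pyplot as plt

plt.figure()
plt.plot(lmds, finalMSE, marker='o')   # final MSE for each lambda in {0, 0.1, ..., 1}
plt.xlabel('Lambda')
plt.ylabel('MSE vs. Monte-Carlo Q*')
plt.title('Mean-squared error against Lambda')
plt.show()
```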
doc/LectureNotes/_build/jupyter_execute/hw2.ipynb
###Markdown In class (the falling baseball example) we used an analytical expression for the height of a falling ball.In the first homework we used instead the position from experiment (Usain Bolt's 100m record run) and stored thisinformation with one-dimensional arrays in Python.Let us get some practice with this. The cell below creates two arrays,one containing the times to be analyzed and the other containing the $x$and $y$ components of the position vector at each point in time. This is a two-dimensional object. Thesecond array is initially empty. Then we define the initialposition to be $x=2$ and $y=1$. Take a look at the code and commentsto get an understanding of what is happening. Feel free to play around with it. ###Code tf = 4 #length of value to be analyzed dt = .001 # step sizes t = np.arange(0.0,tf,dt) # Creates an evenly spaced time array going from 0 to 3.999, with step sizes .001 p = np.zeros((len(t), 2)) # Creates an empty array of [x,y] arrays (our vectors). Array size is same as the one for time. p[0] = [2.0,1.0] # This sets the inital position to be x = 2 and y = 1 ###Output _____no_output_____ ###Markdown Below we are printing specific values of our array to see what is beingstored where. The first number in the array $r[]$ represents which arrayiteration we are looking at, while the number after the representswhich listed number in the array iteration we are getting back. ###Code print(p[0]) # Prints the first array print(p[0,:]) # Same as above, these commands are interchangeable print(p[3999]) # Prints the 4000th array print(p[0,0]) # Prints the first value of the first array print(p[0,1]) # Prints the second value of first array print(p[:,0]) # Prints the first value of all the arrays ###Output 1.0 [2. 0. 0. ... 0. 0. 0.] ###Markdown Then try running this cell. Notice how it gives an error since we did not implement a third dimension into our arrays ###Code print(p[:,2]) ###Output _____no_output_____ ###Markdown In the cell below we want to manipulate the arrays.In this example we make each vector's $x$ component valued the same as their respective vector's position in the iteration and the $y$ value will be twice that value, except for the first vector, which we have already set. That is we have $p[0] = [2,1], p[1] = [1,2], p[2] = [2,4], p[3] = [3,6], ...$Here we set up an array for $x$ and $y$ values. ###Code for i in range(1,3999): p[i] = [i,2*i] # Checker cell to make sure your code is performing correctly c = 0 for i in range(0,3999): if i == 0: if p[i,0] != 2.0: c += 1 if p[i,1] != 1.0: c += 1 else: if p[i,0] != 1.0*i: c += 1 if p[i,1] != 2.0*i: c += 1 if c == 0: print("Success!") else: print("There is an error in your code") ###Output _____no_output_____ ###Markdown You could also think of an alternative way of storing the above information. Feel free to explore how to storemultidimensional objects. Last week we studied Usain Bolt's 100m run and in class we studied a falling baseball. We made basic plots of the baseballmoving in one dimension. This week we will be working with a three-dimensional variant. This will be useful for our next homeworks and numerical projects. Assume we have a soccer ball moving in three dimensions with the following trajectory:1. $x(t) = 10t\cos{45^{\circ}} $2. $y(t) = 10t\sin{45^{\circ}} $3. $z(t) = 10t - \dfrac{9.81}{2}t^2$Now let us create a three-dimensional (3D) plot using these equations. In the cell belowwe write the equations into their respective labels. 
We fix a final time in the code below.Important Concept: Numpy comes with many mathematical packages, someof them being the trigonometric functions sine, cosine, tangent. Weare going to utilize these this week. Additionally, these functionswork with radians, so we will also be using a function from Numpy thatconverts degrees to radians. ###Code tf = 2.04 # The final time to be evaluated dt = 0.1 # The time step size t = np.arange(0,tf,dt) # The time array theta_deg = 45 # Degrees theta_rad = np.radians(theta_deg) # Converts degrees to their radian counterparts x = 10*t*np.cos(theta_rad) # Equation for our x component, utilizing np.cos() and our calculated radians y = 10*t*np.sin(theta_rad) # Put the y equation here z = 10*t-9.81/2*t**2# Put the z equation here ###Output _____no_output_____ ###Markdown Then we plot it ###Code ## Once you have entered the proper equations in the cell above, run this cell to plot in 3D fig = plt.axes(projection='3d') fig.set_xlabel('x') fig.set_ylabel('y') fig.set_zlabel('z') fig.scatter(x,y,z) ###Output _____no_output_____ ###Markdown * 6a (8pt) How would you express $x(t)$, $y(t)$, $z(t)$ for this problem as a single vector, $\boldsymbol{r}(t)$?Then run the code and plot using the array $r$ ###Code ## Run this code to plot using our r array fig = plt.axes(projection='3d') fig.set_xlabel('x') fig.set_ylabel('y') fig.set_zlabel('z') fig.scatter(r[0],r[1],r[2]) ###Output _____no_output_____ ###Markdown <!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/)doconce format html hw2.do.txt --no_mako --> PHY321: Classical Mechanics 1**Homework 2, due January 28 (Midnight)**Date: **Jan 18, 2022** Practicalities about homeworks and projects1. You can work in groups (optimal groups are often 2-3 people) or by yourself. If you work as a group you can hand in one answer only if you wish. **Remember to write your name(s)**!2. Homeworks are available 10 days before the deadline.3. How do I(we) hand in? You can hand in the paper and pencil exercises as a scanned document. For this homework this applies to exercises 1-5. The scanned document should be uploaded to D2L. Alternatively, you can hand in everyhting (if you are ok with typing mathematical formulae using say Latex) as a jupyter notebook at D2L. The numerical exercise(s) (exercise 6 here) should always be handed in as a jupyter notebook by the deadline at D2L. Exercise 1 (10 pt), Forces, discussion questions, test your intuition* 1a (2pt) Single force. Can an object affected only by a single force have zero acceleration?* 1b (2pt) Zero velocity. If you throw a ball vertically it has zero velocity at its maximum point. Does it also have zero acceleration at this point?* 1c (3pt) Acceleration of gravity. You measure the acceleration of gravity in an elevator moving at a velocity of 9.8m/s downwards. What will you measure?* 1d (3pt) Air resistance. You throw a ball straight up and measure the velocity as it passes you on its way down. Will the velocity be larger, the same, or smaller if you did the same experiment in vacuum? Exercise 2 (10 pt), setting up forces, Newton's second lawUseful material here to read is1. Taylor chapters 1.3 and 1.4 and2. 
Malthe-Sørenssen chapters 5.1, 5.2 and 5.3A person jumps from an airplane, falling freely for several seconds before the person pulls the cord of her parachute and the parachute unfolds.* 2a (3pt) Identify the forces acting on the parachuter and draw a free-body diagram of the parachuter before the person has pulled the cord.* 2b (3pt) Identify the forces acting on the parachuter and draw a free-body diagram of the parachuter after the person has pulled the cord.* 2c (4pt) Sketch the net force acting on the parachuter as a function of time, F(t). Exercise 3 (10 pt), Space shuttle with air resistanceUseful material here to read is1. Malthe-Sørenssen chapters 5.1, 5.2 and 5.3During lift-off of the space shuttle the engines provide a force of $35\times 10^{6}$ N. The mass of the shuttle is approximately$2\times 10^6$ kg.* 3a (3pt) Draw a free-body diagram of the space shuttle immediately after lift-off.* 3b (3pt) Find an expression for the acceleration of the space shuttle immediately after lift-off.Let us assume that the force from the engines is constant, and that the mass of thespace shuttle does not change significantly over the first 20 s.* 3c (4pt) Find the velocity and position of the space shuttle after 20 s if you ignore air resistance. Exercise 4 (15 pt), now hitting a golf ballUseful material here to read is1. Taylor chapters 1.3-1.6 and2. Malthe-Sørenssen chapter 6.3-6.4 and 7.1-7.3**Taylor exercise 1.35**. The formulae you obtain here will be useful for the numerical exercises below (see exercise 6 below). Exercise 5 (15 pt), hitting a puck insteadTaylor exercise 1.38. Exercise 6 (40pt), Numerical elements, moving to more than one dimension**This exercise should be handed in as a jupyter-notebook** at D2L. Remember to write your name(s). Last week we:1. Analytically mapped 1D motion over some time2. Gained practice with functions3. Reviewed vectors and matrices in PythonThis week we will:1. Practice using Python syntax and variable manipulation2. Utilize analytical solutions to create more refined functions3. Work in two, three or even higher dimensionsThis material will then serve as background for the numerical part of homework 3. The first part is a simple warm-up, with hints and suggestions you can use for the code to write below. ###Code %matplotlib inline # As usual, here are some useful packages we will be using. Feel free to use more and experiment as you wish. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits import mplot3d %matplotlib inline ###Output _____no_output_____
Sage4HS/02-Sequences-and-Series.ipynb
###Markdown 數列與級數(Sequences and Series) ![Creative Commons License](https://i.creativecommons.org/l/by/4.0/88x31.png)This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). _Tested on SageMath version 8.7_ 數列一個**數列**指的是一連串的數字 $a_1,a_2,\ldots,a_n$ 在 Sage 中,我們可以用**列表**(list)來紀錄一個數列 一個列表由一組中括號和一些逗點組成 `[a1, a2, ..., an]` ###Code seq = [1,2,3,4,5] ###Output _____no_output_____ ###Markdown 如果 `seq` 是一個列表 可以用 `seq[i]` 來叫出 `seq` 中的第 `i` 個元素 但注意在程式設計中,元素是從 0 開始數 ###Code seq[2] ###Output _____no_output_____ ###Markdown Sage 中可以用 `range(n)` 來叫出 `[0, 1, ..., n-1]` 這個列表(`n` 不在裡面) ###Code seq = range(10) seq ###Output _____no_output_____ ###Markdown 也可以用 `range(a,b)` 來叫出 `[a, a+1, ..., b-1]` 這個列表(`b` 不在裡面) ###Code seq = range(3,10) seq ###Output _____no_output_____ ###Markdown 迴圈電腦擅長做重覆且類似的事情 一個**迴圈**(loop)可以 對列表中的所有元素 做相同的事情```Pythonfor element in some_list: do something``` ###Code seq = [2,3,5,7,11] for p in seq: print('%s is a prime number'%p) ###Output 2 is a prime number 3 is a prime number 5 is a prime number 7 is a prime number 11 is a prime number ###Markdown 配合一些 `if` 的判斷式 可以讓迴圈更加靈活 ###Code for i in range(1,101): if i%13 == 1 or i%17 == 1: print(i) ###Output 1 14 18 27 35 40 52 53 66 69 79 86 92 ###Markdown 等差數列一個**等差數列** $a_1, a_2,\ldots, a_n$ 中的每一項都符合 $a_{k+1}=a_k+d$ 的條件其中 $a_1$ 叫做**首項** 而 $d$ 叫做**公差** 比如說 首項為 5 公差為 2 ###Code a = 5 d = 2 ###Output _____no_output_____ ###Markdown 當執行 `a = a + d` 時 意思是將 `a` 更新成 `a + d` (所以將下方的程式跑 9 次就會得到 $a_{10}$) ###Code a = a + d a ###Output _____no_output_____ ###Markdown 利用迴圈讓事情變得更簡單 ###Code a = 5 d = 2 print(1, a) for i in range(2,11): a = a + d print(i, a) ###Output (1, 5) (2, 7) (3, 9) (4, 11) (5, 13) (6, 15) (7, 17) (8, 19) (9, 21) (10, 23) ###Markdown 但數學理論常常能給出簡潔的答案 $a_n = a_1 + (n-1)d$ ###Code a = 5 d = 2 a10 = a + (10 - 1)*d a10 ###Output _____no_output_____ ###Markdown 等比數列一個**等比數列** $a_1, a_2,\ldots, a_n$ 中的每一項都符合 $a_{k+1}=a_k\times r$ 的條件其中 $a_1$ 叫做**首項** 而 $r$ 叫做**公比** 比如說 首項為 5 公比為 2 ###Code a = 5 r = 2 ###Output _____no_output_____ ###Markdown 當執行 `a = a * d` 時 意思是將 `a` 更新成 `a * d` (所以將下方的程式跑 9 次就會得到 $a_{10}$) ###Code a = a * r a ###Output _____no_output_____ ###Markdown 同樣可以用迴圈來處理 ###Code a = 5 r = 2 print(1, a) for i in range(2,11): a = a * r print(i, a) ###Output (1, 5) (2, 10) (3, 20) (4, 40) (5, 80) (6, 160) (7, 320) (8, 640) (9, 1280) (10, 2560) ###Markdown 等比數列的第 $n$ 項為 $a_n = a_1 \times r^{n-1}$ ###Code a = 5 r = 2 a10 = a * r^(10-1) a10 ###Output _____no_output_____ ###Markdown 級數一個**級數**指的是一連串數字的和 $a_1+a_2+\cdots +a_n$ `sum` 函數可以計算列表中所有元素的總和 ###Code seq = [1,2,3,4,5] sum(seq) ###Output _____no_output_____ ###Markdown 也可以利用迴圈來計算總和:設定 `total = 0` 每次把各個元素加進去 `total = total + i` ###Code seq = [1,2,3,4,5] total = 0 for i in seq: total = total + i total ###Output _____no_output_____ ###Markdown 用迴圈來計算等差級數 ###Code a = 5 d = 2 total = a print(1, a, total) for i in range(2,11): a = a + d total = total + a print(i, a, total) ###Output (1, 5, 5) (2, 7, 12) (3, 9, 21) (4, 11, 32) (5, 13, 45) (6, 15, 60) (7, 17, 77) (8, 19, 96) (9, 21, 117) (10, 23, 140) ###Markdown 首項為 $a_1$ 而公差為 $d$ 的等差級數為 $a_1+\cdots +a_n=\frac{(a_1+a_n)\times n}{2} = \frac{(a_1+a_1+(n-1)d)\times n}{2}$ 算出來答案應該要一樣 ###Code a = 5 d = 2 (a + a + (10 - 1)*d) * 10 / 2 ###Output _____no_output_____ ###Markdown 用迴圈來計算等比級數 ###Code a = 5 r = 2 total = a print 1, a, total for i in range(2,11): a = a * r total = total + a print i, a, total ###Output 1 5 5 2 10 15 3 20 35 4 40 75 5 80 155 6 160 315 
###Markdown
The geometric series with initial term $a_1$ and common ratio $r$ ($r\neq 1$) is
$a_1+\cdots +a_n=a_1\times \frac{1-r^n}{1-r}$.
If $r=1$, then $a_1+\cdots +a_n=a_1+\cdots +a_1=na_1$.
The computed answer should be the same.
###Code
a = 5
r = 2
a * ((1-r^10) / (1-r))
###Output
_____no_output_____
###Markdown
List comprehensions

In mathematics, a set can be built from conditions, for example $\{x^2: 1\leq x\leq 100, x\text{ is prime}\}$.
Sage lists can do something similar: `[x^2 for x in range(1,101) if is_prime(x)]`
###Code
seq = [2*k for k in range(1,11)]
seq
seq = [k^2 for k in range(1,11)]
seq
###Output
_____no_output_____
###Markdown
Use the `sum` function to compute the total of a list.
###Code
seq = [2*k for k in range(1,11)]
sum(seq)
seq = [k^2 for k in range(1,11)]
sum(seq)
###Output
_____no_output_____
###Markdown
Add an `if` condition.
###Code
seq = [2*k for k in range(1,11) if k%2 == 0]
sum(seq)
seq = [k^2 for k in range(1,11) if k%2 == 0]
sum(seq)
# Note: the lines below rely on a Monty_Hall_game() function that is not defined in this notebook.
n = 10000
counter = 0
for i in range(n):
    if Monty_Hall_game():
        counter = counter + 1
N(counter / n)
###Output
_____no_output_____
###Markdown
The Fibonacci sequence

The **Fibonacci sequence** satisfies the recurrence relation
$F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for all $n\geq 2$.
How do we compute the $n$-th term?
###Code
F = [0,1]
for n in range(2,11):
    F.append(F[n-1] + F[n-2])
for n in range(11):
    print("F%s = %s"%(n, F[n]))
###Output
F0 = 0
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
###Markdown
In fact the Fibonacci sequence has a closed form, though it is not necessarily easier to compute:
$F_n = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right]$
Think about how this closed form can be found.
###Code
for n in range(11):
    Fn = 1/sqrt(5) * ( ( 0.5*(1+sqrt(5)) )^n - ( 0.5*(1-sqrt(5)) )^n )
    print("F%s = %s"%(n, N(Fn)))
###Output
F0 = 0.000000000000000
F1 = 1.00000000000000
F2 = 1.00000000000000
F3 = 2.00000000000000
F4 = 3.00000000000000
F5 = 5.00000000000000
F6 = 8.00000000000000
F7 = 13.0000000000000
F8 = 21.0000000000000
F9 = 34.0000000000000
F10 = 55.0000000000000
###Markdown
The ratio of consecutive Fibonacci terms approaches the golden ratio $1.61803398875\cdots$
###Code
F = [0,1]
for n in range(2,11):
    F.append(F[n-1] + F[n-2])
for n in range(2,11):
    print("F%s/F%s = %s"%(n, n-1, N(F[n]/F[n-1])))
###Output
F2/F1 = 1.00000000000000
F3/F2 = 2.00000000000000
F4/F3 = 1.50000000000000
F5/F4 = 1.66666666666667
F6/F5 = 1.60000000000000
F7/F6 = 1.62500000000000
F8/F7 = 1.61538461538462
F9/F8 = 1.61904761904762
F10/F9 = 1.61764705882353
###Markdown
The sum of squares of Fibonacci terms is the product of two consecutive terms:
###Code
F = [0,1]
for n in range(2,11):
    F.append(F[n-1] + F[n-2])
for n in range(1,11):
    square_sum = sum(num^2 for num in F[:n+1])
    print("F%s^2 + ... + F%s^2 = %s = F%s*F%s"%(0, n, square_sum, n, n+1))
###Output
F0^2 + ... + F1^2 = 1 = F1*F2
F0^2 + ... + F2^2 = 2 = F2*F3
F0^2 + ... + F3^2 = 6 = F3*F4
F0^2 + ... + F4^2 = 15 = F4*F5
F0^2 + ... + F5^2 = 40 = F5*F6
F0^2 + ... + F6^2 = 104 = F6*F7
F0^2 + ... + F7^2 = 273 = F7*F8
F0^2 + ... + F8^2 = 714 = F8*F9
F0^2 + ... + F9^2 = 1870 = F9*F10
F0^2 + ... + F10^2 = 4895 = F10*F11
###Markdown
Try it yourself

Exercise 1

If `seq` is a list, we can use `seq[i]` to get the `i`-th element.
In fact, `i` can also be a negative number.
Try it: when `seq = [1,2,3,4,5]`, what is `seq[-1]`?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 2

We can also take a slice of a list.
Try it: when `seq = [1,2,3,4,5]`, what is `seq[2:4]`?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 3

Lists can be added together.
Try it: what is `[1,2,3]+[4,5,6]`?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 4

Define a function `rotate` that takes a list `seq` and an integer `k` as input,
and returns a new list whose contents are the elements of `seq` shifted `k` places to the right,
with the rightmost elements wrapped around to the left.
For example, when `seq = [1,2,3,4,5]` and `k = 2`, the returned list is `[4,5,1,2,3]`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 5

Count the number of primes between 1 and 1000.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 6

Count how many numbers between 1 and 1000 are multiples of 2 and of 3, but not of 5.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 7

Compute the sum of the primes between 1 and 1000.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 8

Some numbers between 1 and 1000 are multiples of 2 and of 3, but not of 5.
Compute the sum of these numbers.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 9

Use a list comprehension to build a list containing all primes between 1 and 1000.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 10

Use a list comprehension to build a list containing all numbers between 1 and 1000
that are multiples of 2 and of 3, but not of 5.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 11

If `seq` is a list, then `seq.append(k)` appends the element `k` at the end of the list.
The first few terms of the Fibonacci sequence are $a_0=0$, $a_1=1$, $a_2=1$,
and they satisfy the recurrence relation $a_n = a_{n-1}+a_{n-2}$.
Build a list `a` recording terms 0 through 99 of the Fibonacci sequence.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 12

Among 1 to 1000, how many numbers leave remainder 3 when divided by 13,
remainder 5 when divided by 17, and remainder 10 when divided by 19?
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 13

Among 1 to 1000, what is the sum of the numbers that leave remainder 3 when divided by 13,
remainder 5 when divided by 17, and remainder 10 when divided by 19?
###Code
### your answer here
###Output
_____no_output_____
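###Markdown
For reference, one possible solution sketch for Exercise 5 (counting the primes from 1 to 1000), using the same `is_prime` and list-comprehension tools introduced above:
###Code
# A reference sketch for Exercise 5: count the primes between 1 and 1000.
primes_up_to_1000 = [x for x in range(1, 1001) if is_prime(x)]
len(primes_up_to_1000)   # there are 168 primes below 1000
###Output
_____no_output_____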
docs/source/examples/geochem/lambdas_dimreduction.ipynb
###Markdown
lambdas: Dimensional Reduction
===============================
Orthogonal polynomial decomposition can be used for dimensional reduction of a smooth function over an independent variable, producing an array of independent values representing the relative weights for each order of component polynomial.
In geochemistry, the most applicable use case is the reduction of Rare Earth Element (REE) profiles. The REE are a collection of elements with broadly similar physicochemical properties (the lanthanides), which vary with ionic radii. Given their similar behaviour and their typically smooth patterns of normalised abundance vs. ionic radii, the REE profiles and their shapes can be effectively parameterised and dimensionally reduced (14 elements summarised by 3-4 shape parameters).
Here we generate some example data, reduce these to lambda values, and plot the resulting dimensionally reduced data.
###Code
import numpy as np
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
from pyrolite.geochem.ind import REE, get_ionic_radii
from pyrolite.plot.spider import REE_v_radii
from pyrolite.util.math import lambdas, lambda_poly_func, OP_constants

np.random.seed(82)
###Output
_____no_output_____
###Markdown
First we'll generate some example data:
###Code
no_analyses = 1000

data_ree = REE(dropPm=True)
data_radii = np.array(get_ionic_radii(data_ree, charge=3, coordination=8))
data_radii = np.tile(data_radii, (1, no_analyses)).reshape(
    no_analyses, data_radii.shape[0]
)

noise = np.random.randn(*data_radii.shape) * 0.1
constant = -0.1
lin = np.tile(np.linspace(3.0, 0.0, data_radii.shape[1]), (no_analyses, 1))
lin = (lin.T * (1.1 + 0.4 * np.random.rand(data_radii.shape[0]))).T
quad = -1.2 * (data_radii - 1.11) ** 2.0

lnY = noise + constant + lin + quad

for ix, el in enumerate(data_ree):
    if el in ["Ce", "Eu"]:
        lnY[:, ix] += np.random.rand(no_analyses) * 0.6

df = pd.DataFrame(np.exp(lnY), columns=data_ree)

ax = df.pyroplot.REE(
    marker="D",
    alpha=0.01,
    c="0.5",
    markerfacecolor="k",
    markeredgecolor="k",
    index="elements",
)
plt.show()
###Output
_____no_output_____
###Markdown
From this data we can calculate and plot the lambda values:
###Code
ls = df.pyrochem.lambda_lnREE(
    exclude=["Ce", "Eu", "Pm"], degree=4, norm_to="Chondrite_PON"
)

fig, ax = plt.subplots(1, 3, figsize=(9, 3))

ax_labels = ls.columns

for ix in range(ls.columns.size - 1):
    l1, l2 = ax_labels[ix], ax_labels[ix + 1]
    ax[ix].scatter(ls[l1], ls[l2], alpha=0.1, c="k")
    ax[ix].set_xlabel(l1)
    ax[ix].set_ylabel(l2)

plt.tight_layout()
fig.suptitle("lambdas for Dimensional Reduction", y=1.05)
###Output
_____no_output_____
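###Markdown
To make the decomposition idea more concrete, the following is a purely conceptual sketch using plain numpy: it fits a low-order ordinary polynomial (not the orthogonal polynomials, normalisation, or exclusions used by `lambda_lnREE` above) to a single ln(REE) pattern as a function of ionic radius, and treats the few fitted coefficients as crude shape parameters.
###Code
# Conceptual sketch only - not the pyrolite implementation used above.
example_lny = np.log(df.iloc[0].values)   # one synthetic REE pattern (unnormalised)
radii = data_radii[0]                     # the matching ionic radii
coeffs = np.polynomial.polynomial.polyfit(radii, example_lny, deg=3)
coeffs                                    # 4 numbers summarising the 14-element profile
###Output
_____no_output_____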
analysis/jupyter/spice/dataset_factory.ipynb
###Markdown Dataset FactoryNotebook for using the xrfuncs module to combine spice-2 simlulation results into single datasets for analysis. ###Code %matplotlib tk import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl mpl.rcParams['text.usetex'] = False mpl.rcParams['font.size'] = 14 import xarray as xr import pandas as pd import scipy.io as sio import sys import os import glob import copy import pathlib as pth import importlib import math sys.path.append('/home/jleland/Coding/Projects/flopter') import flopter.spice.splopter as spl import flopter.spice.tdata as td import flopter.core.ivdata as iv import flopter.core.fitters as fts import flopter.core.fitdata as fd import flopter.core.lputils as lpu import flopter.core.constants as c import flopter.spice.inputparser as inp import flopter.spice.normalise as nrm import flopter.spice.utils as spu import flopter.spice.xrfuncs as xrf ###Output _____no_output_____ ###Markdown Tool for globbing together the run directories you want to combineThis ###Code spice_dir = pth.Path('/home/jleland/data/external_big/spice/') os.chdir(spice_dir) skippable_runs = set([ 'marconi/spice2/sheath_exp_hg/flat_flush_gapfill/alpha_yz_-6.0', # accidental duplicate 'marconi/spice2/sheath_exp_hg/angled_recessed_as/alpha_yz_-2.0', 'marconi/spice2/sheath_exp_hg/angled_recessed_as/alpha_yz_-3.0', 'marconi/spice2/sheath_exp_hg_fflwp/angled_recessed_as/alpha_yz_-2.0', 'marconi/spice2/sheath_exp_hg_fflwp/angled_recessed_as/alpha_yz_-3.0', 'marconi/spice2/sheath_exp_hg_fflwp/angled_recessed_as/alpha_yz_-4.0', 'marconi/spice2/sheath_exp_hg_fflwp/angled_recessed_as/alpha_yz_-5.0', 'marconi/spice2/sheath_exp_hg_fflwp/angled_recessed_as/alpha_yz_-6.0', 'marconi/spice2/sheath_exp_hg_fflwp/flat_flush_as/alpha_yz_-2.0', 'marconi/spice2/sheath_exp_fflwp/angled_recessed_as/alpha_yz_-2.0', 'marconi/spice2/sheath_exp_fwp/flat_flush_wp-2_as_1_/alpha_yz_-2.0', 'marconi/spice2/bergmann_bm/flat_flush_lowas/alpha_yz_-1.5', # 'marconi/spice2/shexp_shad_min/flat_flush_as/alpha_yz_-4.0', # unfinished 'marconi/spice2/shexp_shad_fwp0/angled_recessed_as/alpha_yz_-4.0', 'marconi/spice2/shexp_shad_fwp0/flat_flush_as/alpha_yz_-4.0' ]) skippable_scans = set() single_sims = set() if 1 == 0: sr_sorted = list(skippable_runs) sr_sorted.sort() for skippable_run in sr_sorted: backup_dir = list(pth.Path(skippable_run).glob('backup*'))[-1] # print(backup_dir/'log.out') print(f'{skippable_run}:') print(f'\t{spl.Splopter.get_h_remaining_lines(backup_dir/"log.out")[-1]}\n') non_standard_variables = {'t', 'ProbePot', 'npartproc', 'Nz', 'Nzmax', 'Ny', 'count', 'Npc', 'snumber', 'nproc'} desired_variables = (td.DEFAULT_REDUCED_DATASET | non_standard_variables) - {td.OBJECTSCURRENTFLUXE, td.OBJECTSCURRENTFLUXI} # scans_searchstr = '*/*/sheath_exp/*' # scans_searchstr = '*/*/sheath_exp_fwp/*' scans_searchstr = [ # '*/*/sheath_exp_hg/angled_recessed_as', # '*/*/sheath_exp_hg/flat_flush*', # '*/*/sheath_exp_hg/*', # '*/*/sheath_exp_hg_fflwp/*' # '*/*/sheath_exp_fflwp/*' # '*/*/sheath_exp_fwp/*wp-2*', # '*/*/sheath_exp_fwp/flat_flush_as' # '*/*/bergmann_bm/*' '*/*/shexp_shad_fflwp*/*', '*/*/shexp_shad_min/*', # '*/*/shexp_shad_fwp0/*', ] # disallowed_angles = ['-2.0', '-3.0', '-4.0', '-5.0', '-6.0'] disallowed_angles = ['-2.0', '-3.0'] scans, all_run_dirs = xrf.get_run_dirs(scans_searchstr, skippable_runs=skippable_runs, disallowed_angles=disallowed_angles) ###Output [0]: marconi/spice2/shexp_shad_fflwp/angled_recessed_as [0,0]: angled_recessed_as/alpha_yz_-11.0 [0,1]: 
angled_recessed_as/alpha_yz_-12.0 [0,2]: angled_recessed_as/alpha_yz_-14.0 [0,3]: angled_recessed_as/alpha_yz_-16.0 [0,4]: angled_recessed_as/alpha_yz_-18.0 [0,5]: angled_recessed_as/alpha_yz_-20.0 [0,6]: angled_recessed_as/alpha_yz_-25.0 [0,7]: angled_recessed_as/alpha_yz_-30.0 [0,8]: angled_recessed_as/alpha_yz_-5.0 [0,9]: angled_recessed_as/alpha_yz_-7.0 [0,10]: angled_recessed_as/alpha_yz_-8.0 [0,11]: angled_recessed_as/alpha_yz_-9.0 [1]: marconi/spice2/shexp_shad_fflwp/flat_flush_as [1,0]: flat_flush_as/alpha_yz_-11.0 [1,1]: flat_flush_as/alpha_yz_-12.0 [1,2]: flat_flush_as/alpha_yz_-14.0 [1,3]: flat_flush_as/alpha_yz_-16.0 [1,4]: flat_flush_as/alpha_yz_-18.0 [1,5]: flat_flush_as/alpha_yz_-20.0 [1,6]: flat_flush_as/alpha_yz_-25.0 [1,7]: flat_flush_as/alpha_yz_-30.0 [1,8]: flat_flush_as/alpha_yz_-5.0 [1,9]: flat_flush_as/alpha_yz_-7.0 [1,10]: flat_flush_as/alpha_yz_-8.0 [1,11]: flat_flush_as/alpha_yz_-9.0 [2]: marconi/spice2/shexp_shad_min/angled_recessed_as [2,0]: angled_recessed_as/alpha_yz_-10.0 [2,1]: angled_recessed_as/alpha_yz_-4.0 [2,2]: angled_recessed_as/alpha_yz_-6.0 [3]: marconi/spice2/shexp_shad_min/flat_flush_as [3,0]: flat_flush_as/alpha_yz_-10.0 [3,1]: flat_flush_as/alpha_yz_-4.0 [3,2]: flat_flush_as/alpha_yz_-6.0 ###Markdown The function itself ###Code importlib.reload(xrf) datasets, probes, thetas = xrf.create_scan_probe_datasets(scans, all_run_dirs) datasets.keys() ###Output _____no_output_____ ###Markdown Combining together the individual datasetsThis has been implemented as a do-all function, done by combining all groups (i.e. folders in bin/data/) as datasets 2D in probe name (i.e. angled_recessed_...) and theta. These can then be further combined if desired. ###Code ## DO NOT USE! These have now been implemented in xrfuncs and so are obsolete. 
probe_theta_ps = { 'angled':10.0, 'flat':0.0, 'semi-angled':5.0, } probe_recessions = { 'recessed': 1.0e-3, 'semi-recessed': 0.5e-3, 'flush': 0.0e-3, } def combine_1d_dataset(probe_name, datasets, concat_dim='theta', theta_p='auto', recession='auto'): combined_ds = xr.concat(datasets[probe_name], dim=concat_dim).sortby(concat_dim) if theta_p == 'auto': theta_p = probe_theta_ps[probe_name.split('_')[0]] if recession == 'auto': recession = probe_recessions[probe_name.split('_')[1]] gap = 0.0 if 'gapless' in probe_name else 1.0e-3 combined_ds = combined_ds.assign_coords( recession=recession, gap=gap, theta_p=theta_p, theta_p_rads=np.radians(theta_p), theta_rads=np.radians(combined_ds.theta) ) return combined_ds def combine_2d_dataset(probe_names, datasets, concat_dim='probe', ): c1d_datasets = [combine_1d_dataset(probe_name, datasets) for probe_name in probe_names] probe_da = xr.DataArray(probe_names, dims='probe', coords={'probe': probe_names}) return xr.concat(c1d_datasets, dim=probe_da).drop(None) probes combined_ds = xrf.combine_1d_dataset('flat_flush', datasets) combineder_ds = xrf.combine_2d_dataset(probes, datasets, extra_dims={'run':'hg'}) # combineder_ds.sel(probe='angled_recessed') combined_ds fig, ax = plt.subplots(3, sharex=True, figsize=[8,8]) # fig = plt.figure(figsize=[8,8]) plot_ds = combineder_ds.sel(probe='flat_flush', voltage=slice(-15,None)).set_coords('voltage_corr') #.swap_dims({'voltage':'voltage_corr'}) #, theta=[4.0, 6.0, 8.0, 12.0]) plot_ds.current.plot(hue='theta', x='voltage_corr', ax=ax[0]) plot_ds.current_e.plot(hue='theta', x='voltage_corr', ax=ax[1]) plot_ds.current_i.plot(hue='theta', x='voltage_corr', ax=ax[2]) # .current.plot.line(hue='theta', x='voltage', ax=ax), col=['current', 'current_e', 'current_i'] for axis in ax: axis.get_legend().remove() fig.tight_layout() combined_ds.v_f.plot.line(x='theta') combined_ds.ion_I_sat.plot.line(x='theta') fig, ax = plt.subplots(2) dummy_theta = np.linspace(2, 45.0, 5000) for i, probe in enumerate(combineder_ds.probe.values): plot_ds = combineder_ds.sel(probe=probe, run='hg') ax[i].errorbar(plot_ds['theta_p']+plot_ds['theta'], plot_ds['ion_a'], yerr=plot_ds['ion_d_a'], fmt='.') calced_a = lpu.calc_new_sheath_expansion_param( 5.0, 1e18, 5e-3, 1e-3, np.radians(dummy_theta), plot_ds.recession.values, plot_ds.theta_p_rads.values, # c_1=0.5, c_2=0.6, c_1=0.9, c_2=0.6, # c_1=1.4, c_2=0.39, # from hg-theta=15-30 # c_1=2.0, c_2=0.14, # from hg-theta=11-30 ) ax[i].errorbar(dummy_theta, calced_a, label=r'Predicted - $\theta_{large}$', fmt='-', linewidth=0.8, alpha=0.6) ax[i].set_ylim(0,0.15) combined_ds['theta_p'] = 10.0 combined_ds = combined_ds.assign_coords( theta_p_rads=np.radians(combined_ds.theta_p), theta_rads=np.radians(combined_ds.theta) ) combined_ds.to_netcdf('sheath_exp_hg_ar_ivs.nc') ###Output _____no_output_____ ###Markdown Combine several groups together through ###Code # group_name_searchstrings = { # 'hg': ['*/*/sheath_exp_hg/*'], # 'hg_fflwp': ['*/*/sheath_exp_hg_fflwp/*'], # 'fwp_2': ['*/*/sheath_exp_fwp/*wp-2*'], # 'fwp_0': ['*/*/sheath_exp_fwp/*_as'], # 'fflwp': ['*/*/sheath_exp_fflwp/*'], # # 'old': ['*/*/sheath_exp'], # # 'new': ['*/*/new_sheath_exp'], # # 'bbm': ['*/*/bergmann_bm/*'], # } group_name_searchstrings = { 'fwp_0': ['*/*/shexp_shad_fwp0/*'], 'fflwp': ['*/*/shexp_shad_fflwp/*', '*/*/shexp_shad_min/*'], # 'old': '*/*/sheath_exp', # 'new': '*/*/new_sheath_exp', # 'bbm': '*/*/bergmann_bm/*', } for group, searchstr in group_name_searchstrings.items(): print(f'{group}:{searchstr}') scans, all_run_dirs 
= xrf.get_run_dirs(searchstr, skippable_runs=skippable_runs, disallowed_angles=disallowed_angles) run_long_analysis_fl = False if run_long_analysis_fl: datasets = [] for group, searchstr in group_name_searchstrings.items(): print(f'{group}:{searchstr}') scans, all_run_dirs = xrf.get_run_dirs(searchstr, skippable_runs=skippable_runs, disallowed_angles=disallowed_angles, print_fl=False) combined_ds = xrf.create_scan_dataset(scans, all_run_dirs, extra_dims={'run':group}) datasets.append(combined_ds) # datasets_dir = pth.Path('/home/jleland/data/external_big/spice/sheath_exp_datasets') # datasets_dir = pth.Path('/home/jleland/data/external_big/spice/sheath_exp_datasets/10V_cap') datasets_dir = pth.Path('/home/jleland/data/external_big/spice/shexp_datasets') os.chdir(datasets_dir) # A couple of lines to add 'probe' to the datasets that were missing them (as they were 1d) # This will no longer be necessary # datasets[2] = datasets[2].expand_dims(dim=['probe']).assign_coords(probe=['flat_flush']) # datasets[2] # datasets[5] = datasets[5].expand_dims(dim=['probe']).assign_coords(probe=['flat_flush_bbm']) # datasets[5] for ds in datasets: run = ds.run.values[0] print(run) ds.to_netcdf(f'se_{run}_ivs.nc') for i, ds in enumerate(datasets): run = ds.run.values[0] print(f'[{i}]: {run}') print(ds.dims) combined_ds = xr.concat(datasets, dim='run') combined_ds combined_ds.sel(run='hg_fflwp', probe='flat_flush')['ion_voltage_max'].values fig, ax = plt.subplots(2) combined_ds.sel(run=['fflwp', 'fwp_0'], probe='angled_recessed')['ion_a'].plot(x='theta', hue='run', marker='s', mfc='none', ax=ax[0]) combined_ds.sel(run=['fflwp', 'fwp_0'], probe='flat_flush')['str_iv_a'].plot(x='theta', hue='run', marker='s', ax=ax[1]) fig, ax = plt.subplots(2) combined_ds.sel(run=['fflwp', 'fwp_2', 'fwp_0'], probe='flat_flush')['ion_a'].plot(x='theta', hue='run', marker='s', ax=ax[0]) combined_ds.sel(run=['fflwp', 'fwp_2', 'fwp_0'], probe='angled_recessed')['ion_a'].plot(x='theta', hue='run', marker='s', ax=ax[1]) fig, ax = plt.subplots(2) combined_ds.sel(run=['hg_fflwp', 'hg'], probe='flat_flush')['ion_a'].plot(x='theta', hue='run', marker='s', ax=ax[0]) combined_ds.sel(run=['hg_fflwp', 'hg'], probe='angled_recessed')['ion_a'].plot(x='theta', hue='run', marker='s', ax=ax[1]) combined_ds.sel(run='fflwp', probe='angled_recessed', theta=slice(10,30))['current_i'].plot(x='voltage', hue='theta') combined_ds.to_netcdf('se_combined.nc') ###Output _____no_output_____
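###Markdown
A possible usage sketch (assuming the `se_combined.nc` file written above) for loading the combined dataset back in a later session and pulling out one probe and run:
###Code
# Sketch: re-open the combined NetCDF written above and select a subset.
import xarray as xr
combined = xr.open_dataset('se_combined.nc')
subset = combined.sel(run='fflwp', probe='flat_flush')
subset['ion_a']
###Output
_____no_output_____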
SIC_AI_Coding_Exercises/SIC_AI_Chapter_08_Coding_Exercises/ex_0704a.ipynb
###Markdown
Coding Exercise 0704a
1. Softmax regression to recognize the handwritten digits:
###Code
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data      # MNIST handwritten digits data!
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.1. Download the MNIST data:
###Code
# verbosity_saved = tf.logging.get_verbosity()                  # Save the current verbosity level if needed.
tf.logging.set_verbosity(tf.logging.ERROR)                      # Set the verbosity level high so that most warnings are ignored.
mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)   # Download the data.
type(mnist)                                                     # Check the type.
###Output
_____no_output_____
###Markdown
1.2. Take a look at the dataset:
###Code
print("Training data X shape: {}".format((mnist.train.images).shape))
print("Training data y shape: {}".format((mnist.train.labels).shape))
print("Training data cases: {}".format(mnist.train.num_examples))
print("\n")
print("Testing data X shape: {}".format((mnist.test.images).shape))
print("Testing data y shape: {}".format((mnist.test.labels).shape))
print("Testing data cases: {}".format(mnist.test.num_examples))
###Output
_____no_output_____
###Markdown
Visualization.
###Code
i_image= 1                                                      # Image index. You can change it at will.
a_single_image = mnist.train.images[i_image].reshape(28,28)     # Reshape as a 2D array.
plt.imshow(1-a_single_image, cmap='gist_gray')                  # Display as grayscale image.
plt.show()
# Check for the minimum and maximum pixel value.
# The data has been min-max-scaled already!
print("MIN : {}".format(a_single_image.min()))
print("MAX : {}".format(a_single_image.max()))
###Output
_____no_output_____
###Markdown
1.3. Do the necessary definitions:
###Code
batch_size = 30                                                 # Size of each (mini) batch.
n_epochs = 20000                                                # Number of training steps (mini-batch iterations).
learn_rate = 0.01
# Single layer.
# Thus, only one set of (b,W) required.
W = tf.Variable(tf.zeros([784,10]))                             # Input nodes = 784. Output nodes = 10.
b = tf.Variable(tf.zeros([10]))                                 # For each output, a bias is required.
X_ph = tf.placeholder(tf.float32, [None, 784])                  # Unspecified number of cases (observations). Input nodes = 784.
y_ph = tf.placeholder(tf.float32,[None,10])                     # The response variable has been one-hot-encoded. There are 10 output nodes.
# A single layer model.
# Not strictly necessary to apply the softmax activation. => in the end we will apply argmax() function to predict the label!
# y_model = tf.nn.softmax(tf.matmul(X_ph, W) + b)
# The following will work just fine.
y_model = tf.matmul(X_ph, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_ph, logits=y_model))     # loss = Cross Entropy.
optimizer = tf.train.AdamOptimizer(learning_rate = learn_rate)                 # A better optimizer.
# optimizer = tf.train.GradientDescentOptimizer(learning_rate = learn_rate)    # A basic optimizer.
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
1.4. Training and Testing:
###Code
with tf.Session() as sess:
    # Variables initialization.
    sess.run(init)
    # Training.
    for i in range(n_epochs):
        batch_X, batch_y = mnist.train.next_batch(batch_size)        # Sample a batch!
        my_feed = {X_ph:batch_X, y_ph:batch_y}
        sess.run(train, feed_dict = my_feed)
        if (i + 1) % 2000 == 0: print("Step = {}".format(i + 1))     # Print the step number at every multiple of 2000.
    # Testing.
    correct_predictions = tf.equal(tf.argmax(y_ph, axis=1), tf.argmax(y_model, axis=1))    # In argmax(), axis=1 means horizontal direction.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # Recast the Boolean as float32 first. Then calculate the mean. accuracy_value = sess.run(accuracy, feed_dict={X_ph:mnist.test.images, y_ph:mnist.test.labels}) # Use all of the testing data. ###Output _____no_output_____ ###Markdown Print the testing result. ###Code print("Accuracy = {:5.3f}".format(accuracy_value)) ###Output _____no_output_____
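###Markdown
As a side note on why the explicit softmax can be skipped for prediction (a small sketch in plain NumPy, independent of the TensorFlow graph above): softmax is a monotonic transformation, so the argmax of the raw logits equals the argmax of the softmax probabilities.
###Code
# Sketch: softmax does not change which class has the largest score.
logits = np.array([2.0, -1.0, 0.5, 3.2])        # example raw scores for 4 classes
probs = np.exp(logits) / np.exp(logits).sum()   # softmax probabilities
print(np.argmax(logits), np.argmax(probs))      # both give the same index (3)
###Output
_____no_output_____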
downloaded_kernels/loan_data/kernel_215.ipynb
###Markdown
**Python Analysis**
The aim is to use this data set in ways that focus on manipulating dates and running calculations in Python.
Lending Club lets you buy a slice of a loan when it originates. You can also buy and sell these slices of current loans. When participating in any of these activities, let's try to optimize our return relative to risk.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

date = ['issue_d', 'last_pymnt_d']
cols = ['issue_d', 'term', 'int_rate','loan_amnt', 'total_pymnt', 'last_pymnt_d','sub_grade','grade','loan_status']

loans = pd.read_csv("../input/loan.csv", low_memory=False, parse_dates=date, usecols = cols, infer_datetime_format=True)

# Won't include loans that are Current
# Find any loan that started at least 3 years ago if a 3 year loan and at least 5 if 5 year loan
latest = loans['issue_d'].max()
finished_bool = ((loans['issue_d'] < latest - pd.DateOffset(years=3)) & (loans['term'] == ' 36 months')) | ((loans['issue_d'] < latest - pd.DateOffset(years=5)) & (loans['term'] == ' 60 months'))
finished_loans = loans.loc[finished_bool].copy()   # .copy() avoids SettingWithCopyWarning when adding columns below

# ROI and Time Past
finished_loans['roi'] = ((finished_loans.total_pymnt / finished_loans.loan_amnt)-1)*100

# Return per unit of risk - B combines return and lower risk
print(finished_loans.groupby(['grade'])['roi'].mean()/finished_loans.groupby(['grade'])['roi'].std())

y = finished_loans.groupby(['grade'])['roi'].mean()
x = finished_loans.groupby(['grade'])['roi'].std()
label = ["A","B","C","D","E","F","G"]

fig, ax = plt.subplots()
plt.scatter(x, y)
plt.axis([0,50,0,12])
ax.set_ylabel('Return')
ax.set_xlabel('Standard Deviation')

for i in range(len(label)):
    plt.annotate(
        s = label[i],
        xy = (x.iloc[i] + .5 , y.iloc[i])   # positional access, since x and y are indexed by grade
    )
###Output
_____no_output_____
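###Markdown
The parsed date columns are not used above. A possible extension (a sketch, assuming the `finished_loans` frame defined above and that `last_pymnt_d` is populated) is to annualise the ROI over each loan's actual lifetime before comparing grades:
###Code
# Sketch: annualise ROI using the time between issue and last payment.
fl = finished_loans.dropna(subset=['last_pymnt_d']).copy()
fl['years_held'] = (fl['last_pymnt_d'] - fl['issue_d']).dt.days / 365.25
fl = fl[fl['years_held'] > 0]                                    # guard against same-day records
fl['roi_annualized'] = ((1 + fl['roi']/100)**(1/fl['years_held']) - 1)*100
print(fl.groupby('grade')['roi_annualized'].mean())
###Output
_____no_output_____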
content/python/pandas/.ipynb_checkpoints/Window-Function&Customisation-checkpoint.ipynb
###Markdown ---title: "Window-Function&Customisation"author: "Palaniappan S"date: 2020-09-04description: "-"type: technical_notedraft: false--- ###Code import numpy as np import scipy.stats import pandas as pd import sklearn df = pd.DataFrame(np.random.randn(10, 4), index = pd.date_range('1/1/2000', periods=10), columns = ['A', 'B', 'C', 'D']) df.rolling(window=3).mean() df.expanding(min_periods=3).mean() df.ewm(com=0.5).mean() pd.get_option("display.max_rows") pd.get_option("display.max_columns") pd.set_option("display.max_rows",80) print (pd.get_option("display.max_rows")) pd.set_option("display.max_columns",30) print (pd.get_option("display.max_columns")) pd.reset_option("display.max_rows") print (pd.get_option("display.max_rows")) pd.describe_option("display.max_rows") with pd.option_context("display.max_rows",10): print(pd.get_option("display.max_rows")) ###Output 10
notebooks/ensemble_adaboost.ipynb
###Markdown
Adaptive Boosting (AdaBoost)
In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. The aim is to get intuitions regarding the internal machinery of AdaBoost and boosting in general.
We will load the "penguin" dataset. We will predict penguin species from the culmen length and depth features.
###Code
import pandas as pd

penguins = pd.read_csv("../datasets/penguins_classification.csv")
culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"]
target_column = "Species"

data, target = penguins[culmen_columns], penguins[target_column]
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
We will purposefully train a shallow decision tree. Since it is shallow, it is unlikely to overfit and some of the training examples will even be misclassified.
###Code
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier

palette = ["tab:red", "tab:blue", "black"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data, target)
###Output
_____no_output_____
###Markdown
We can predict on the same dataset and check which samples are misclassified.
###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. 
###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] range_features = { feature_name: (data[feature_name].min() - 1, data[feature_name].max() + 1) for feature_name in data.columns} ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. In addition, we are also using the function used in the previous notebookto plot the decision function of the tree. ###Code import numpy as np import matplotlib.pyplot as plt def plot_decision_function(fitted_classifier, range_features, ax=None): """Plot the boundary of the decision function of a classifier.""" from sklearn.preprocessing import LabelEncoder feature_names = list(range_features.keys()) # create a grid to evaluate all possible samples plot_step = 0.02 xx, yy = np.meshgrid( np.arange(*range_features[feature_names[0]], plot_step), np.arange(*range_features[feature_names[1]], plot_step), ) # compute the associated prediction Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = LabelEncoder().fit_transform(Z) Z = Z.reshape(xx.shape) # make the plot of the boundary and the data samples if ax is None: _, ax = plt.subplots() ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu") return ax ###Output _____no_output_____ ###Markdown We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. 
###Code target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. 
We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.**FIXME: I think we should add a reference to ESL here.**We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() ax = sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. 
###Code import numpy as np target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] import matplotlib.pyplot as plt from helpers.plotting import DecisionBoundaryDisplay DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. 
###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() # we convert `data` into a NumPy array to avoid a warning raised in scikit-learn DecisionBoundaryDisplay.from_estimator( tree, data.to_numpy(), response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] range_features = { feature_name: (data[feature_name].min() - 1, data[feature_name].max() + 1) for feature_name in data.columns} ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. In addition, we are also using the function used in the previous notebookto plot the decision function of the tree. 
###Code import numpy as np import matplotlib.pyplot as plt def plot_decision_function(fitted_classifier, range_features, ax=None): """Plot the boundary of the decision function of a classifier.""" from sklearn.preprocessing import LabelEncoder feature_names = list(range_features.keys()) # create a grid to evaluate all possible samples plot_step = 0.02 xx, yy = np.meshgrid( np.arange(*range_features[feature_names[0]], plot_step), np.arange(*range_features[feature_names[1]], plot_step), ) # compute the associated prediction Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = LabelEncoder().fit_transform(Z) Z = Z.reshape(xx.shape) # make the plot of the boundary and the data samples if ax is None: _, ax = plt.subplots() ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu") return ax ###Output _____no_output_____ ###Markdown We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. ###Code target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. 
Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() ax = sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. 
###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. ###Code import numpy as np target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] import matplotlib.pyplot as plt from helpers.plotting import DecisionBoundaryDisplay DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. 
###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() # we convert `data` into a NumPy array to avoid a warning raised in scikit-learn DecisionBoundaryDisplay.from_estimator( tree, data.to_numpy(), response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. 
###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] range_features = { feature_name: (data[feature_name].min() - 1, data[feature_name].max() + 1) for feature_name in data.columns} ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. In addition, we are also using the function used in the previous notebookto plot the decision function of the tree. ###Code import numpy as np import matplotlib.pyplot as plt def plot_decision_function(fitted_classifier, range_features, ax=None): """Plot the boundary of the decision function of a classifier.""" from sklearn.preprocessing import LabelEncoder feature_names = list(range_features.keys()) # create a grid to evaluate all possible samples plot_step = 0.02 xx, yy = np.meshgrid( np.arange(*range_features[feature_names[0]], plot_step), np.arange(*range_features[feature_names[1]], plot_step), ) # compute the associated prediction Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = LabelEncoder().fit_transform(Z) Z = Z.reshape(xx.shape) # make the plot of the boundary and the data samples if ax is None: _, ax = plt.subplots() ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu") return ax ###Output _____no_output_____ ###Markdown We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. ###Code target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output /home/explore/miniconda3/envs/scikit-learn-course/lib/python3.9/site-packages/seaborn/relational.py:651: UserWarning: You passed a edgecolor/edgecolors ('w') for an unfilled marker ('+'). Matplotlib is ignoring the edgecolor in favor of the facecolor. This behavior may change in the future. points = ax.scatter(*args, **kws) ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. 
In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output /home/explore/miniconda3/envs/scikit-learn-course/lib/python3.9/site-packages/seaborn/relational.py:651: UserWarning: You passed a edgecolor/edgecolors ('w') for an unfilled marker ('+'). Matplotlib is ignoring the edgecolor in favor of the facecolor. This behavior may change in the future. points = ax.scatter(*args, **kws) ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output Number of samples previously misclassified and still misclassified: 0 ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. 
However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() ax = sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output Error of each classifier: [0.05263158 0.05864198 0.08787269] ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] range_features = { feature_name: (data[feature_name].min() - 1, data[feature_name].max() + 1) for feature_name in data.columns} ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. In addition, we are also using the function used in the previous notebookto plot the decision function of the tree. ###Code import numpy as np import matplotlib.pyplot as plt def plot_decision_function(fitted_classifier, range_features, ax=None): """Plot the boundary of the decision function of a classifier.""" from sklearn.preprocessing import LabelEncoder feature_names = list(range_features.keys()) # create a grid to evaluate all possible samples plot_step = 0.02 xx, yy = np.meshgrid( np.arange(*range_features[feature_names[0]], plot_step), np.arange(*range_features[feature_names[1]], plot_step), ) # compute the associated prediction Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = LabelEncoder().fit_transform(Z) Z = Z.reshape(xx.shape) # make the plot of the boundary and the data samples if ax is None: _, ax = plt.subplots() ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu") return ax ###Output _____no_output_____ ###Markdown We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. 
###Code target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. 
We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() ax = sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____ ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. 
###Code import numpy as np target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] import matplotlib.pyplot as plt from helpers.plotting import DecisionBoundaryDisplay DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) DecisionBoundaryDisplay.from_estimator( tree, data, response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output Number of samples previously misclassified and still misclassified: 0 ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. 
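The next cell computes one such weight per tree from its training accuracy. As a side note, once such weights are available the two trees could be combined by a weighted majority vote: for every sample, each classifier adds its weight to the class it predicts, and the class with the largest total wins. The sketch below illustrates this; `first_tree` and `second_tree` are hypothetical names, since this notebook reuses the variable `tree` for both fitted models. ###Code
import numpy as np

def weighted_vote(classifiers, weights, X):
    """Sketch of a weighted majority vote over already fitted classifiers."""
    # stack the predicted labels: shape (n_classifiers, n_samples)
    predictions = np.array([clf.predict(X) for clf in classifiers])
    classes = np.unique(predictions)
    # accumulate, for each class, the total weight of the classifiers voting for it
    scores = np.zeros((len(classes), predictions.shape[1]))
    for weight, predicted_labels in zip(weights, predictions):
        for class_index, label in enumerate(classes):
            scores[class_index] += weight * (predicted_labels == label)
    # keep the class with the largest accumulated weight for every sample
    return classes[scores.argmax(axis=0)]

# hypothetical usage, assuming both fitted trees had been kept under these names:
# weighted_vote([first_tree, second_tree], ensemble_weight, data)
###Output _____no_output_____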
###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() # we convert `data` into a NumPy array to avoid a warning raised in scikit-learn DecisionBoundaryDisplay.from_estimator( tree, data.to_numpy(), response_method="predict", cmap="RdBu", alpha=0.5 ) sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output Error of each classifier: [0.05263158 0.05864198 0.08787269] ###Markdown Adaptive Boosting (AdaBoost)In this notebook, we present the Adaptive Boosting (AdaBoost) algorithm. Theaim is to get intuitions regarding the internal machinery of AdaBoost andboosting in general.We will load the "penguin" dataset. We will predict penguin species from theculmen length and depth features. ###Code import pandas as pd penguins = pd.read_csv("../datasets/penguins_classification.csv") culmen_columns = ["Culmen Length (mm)", "Culmen Depth (mm)"] target_column = "Species" data, target = penguins[culmen_columns], penguins[target_column] range_features = { feature_name: (data[feature_name].min() - 1, data[feature_name].max() + 1) for feature_name in data.columns} ###Output _____no_output_____ ###Markdown NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC. In addition, we are also using the function used in the previous notebookto plot the decision function of the tree. 
###Code import numpy as np import matplotlib.pyplot as plt def plot_decision_function(fitted_classifier, range_features, ax=None): """Plot the boundary of the decision function of a classifier.""" from sklearn.preprocessing import LabelEncoder feature_names = list(range_features.keys()) # create a grid to evaluate all possible samples plot_step = 0.02 xx, yy = np.meshgrid( np.arange(*range_features[feature_names[0]], plot_step), np.arange(*range_features[feature_names[1]], plot_step), ) # compute the associated prediction Z = fitted_classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = LabelEncoder().fit_transform(Z) Z = Z.reshape(xx.shape) # make the plot of the boundary and the data samples if ax is None: _, ax = plt.subplots() ax.contourf(xx, yy, Z, alpha=0.4, cmap="RdBu") return ax ###Output _____no_output_____ ###Markdown We will purposefully train a shallow decision tree. Since it is shallow,it is unlikely to overfit and some of the training examples will even bemisclassified. ###Code import seaborn as sns from sklearn.tree import DecisionTreeClassifier palette = ["tab:red", "tab:blue", "black"] tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target) ###Output _____no_output_____ ###Markdown We can predict on the same dataset and check which samples are misclassified. ###Code target_predicted = tree.predict(data) misclassified_samples_idx = np.flatnonzero(target != target_predicted) data_misclassified = data.iloc[misclassified_samples_idx] # plot the original dataset sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) # plot the misclassified samples ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree predictions \nwith misclassified samples " "highlighted") ###Output _____no_output_____ ###Markdown We observe that several samples have been misclassified by the classifier.We mentioned that boosting relies on creating a new classifier which tries tocorrect these misclassifications. In scikit-learn, learners have aparameter `sample_weight` which forces it to pay more attention tosamples with higher weights during the training.This parameter is set when calling`classifier.fit(X, y, sample_weight=weights)`.We will use this trick to create a new classifier by 'discarding' allcorrectly classified samples and only considering the misclassified samples.Thus, misclassified samples will be assigned a weight of 1 and wellclassified samples will be assigned a weight of 0. ###Code sample_weight = np.zeros_like(target, dtype=int) sample_weight[misclassified_samples_idx] = 1 tree = DecisionTreeClassifier(max_depth=2, random_state=0) tree.fit(data, target, sample_weight=sample_weight) sns.scatterplot(data=penguins, x=culmen_columns[0], y=culmen_columns[1], hue=target_column, palette=palette) ax = sns.scatterplot(data=data_misclassified, x=culmen_columns[0], y=culmen_columns[1], label="Previously misclassified samples", marker="+", s=150, color="k") plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title("Decision tree by changing sample weights") ###Output _____no_output_____ ###Markdown We see that the decision function drastically changed. 
Qualitatively, we seethat the previously misclassified samples are now correctly classified. ###Code target_predicted = tree.predict(data) newly_misclassified_samples_idx = np.flatnonzero(target != target_predicted) remaining_misclassified_samples_idx = np.intersect1d( misclassified_samples_idx, newly_misclassified_samples_idx ) print(f"Number of samples previously misclassified and " f"still misclassified: {len(remaining_misclassified_samples_idx)}") ###Output _____no_output_____ ###Markdown However, we are making mistakes on previously well classified samples. Thus,we get the intuition that we should weight the predictions of each classifierdifferently, most probably by using the number of mistakes each classifieris making.So we could use the classification error to combine both trees. ###Code ensemble_weight = [ (target.shape[0] - len(misclassified_samples_idx)) / target.shape[0], (target.shape[0] - len(newly_misclassified_samples_idx)) / target.shape[0], ] ensemble_weight ###Output _____no_output_____ ###Markdown The first classifier was 94% accurate and the second one 69% accurate.Therefore, when predicting a class, we should trust the first classifierslightly more than the second one. We could use these accuracy values toweight the predictions of each learner.To summarize, boosting learns several classifiers, each of which willfocus more or less on specific samples of the dataset. Boosting is thusdifferent from bagging: here we never resample our dataset, we just assigndifferent weights to the original dataset.Boosting requires some strategy to combine the learners together:* one needs to define a way to compute the weights to be assigned to samples;* one needs to assign a weight to each learner when making predictions.Indeed, we defined a really simple scheme to assign sample weights andlearner weights. However, there are statistical theories (like in AdaBoost)for how these sample and learner weights can be optimally calculated.**FIXME: I think we should add a reference to ESL here.**We will use the AdaBoost classifier implemented in scikit-learn andlook at the underlying decision tree classifiers trained. ###Code from sklearn.ensemble import AdaBoostClassifier base_estimator = DecisionTreeClassifier(max_depth=3, random_state=0) adaboost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=3, algorithm="SAMME", random_state=0) adaboost.fit(data, target) for boosting_round, tree in enumerate(adaboost.estimators_): plt.figure() ax = sns.scatterplot(x=culmen_columns[0], y=culmen_columns[1], hue=target_column, data=penguins, palette=palette) plot_decision_function(tree, range_features, ax=ax) plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left") _ = plt.title(f"Decision tree trained at round {boosting_round}") print(f"Weight of each classifier: {adaboost.estimator_weights_}") print(f"Error of each classifier: {adaboost.estimator_errors_}") ###Output _____no_output_____
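###Markdown As a closing sanity check, the learner weights reported above can be recomputed from the per-round errors. With `algorithm="SAMME"` and the default `learning_rate=1`, each learner $m$ receives the weight $\alpha_m = \log\frac{1 - \mathrm{err}_m}{\mathrm{err}_m} + \log(K - 1)$, where $K$ is the number of classes (here $K = 3$ penguin species), so larger errors translate into smaller voting weights. The short sketch below (an illustration, assuming the default `learning_rate`) recomputes these values from `adaboost.estimator_errors_`: ###Code
import numpy as np

# recompute the SAMME learner weights from the per-round errors
# (assumes the default learning_rate=1 used when fitting `adaboost` above)
n_classes = target.nunique()  # K = 3 penguin species
manual_weights = (
    np.log((1 - adaboost.estimator_errors_) / adaboost.estimator_errors_)
    + np.log(n_classes - 1)
)
print(f"Recomputed weights: {manual_weights}")
print(f"Fitted weights:     {adaboost.estimator_weights_}")
###Output _____no_output_____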
days/day08/Display.ipynb
###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f75f443ca90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. 
###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. 
Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7fe56cf21350> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. 
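Beyond wrapping ready-made HTML strings in the `HTML` class, your own objects can hook into the same machinery: alongside `__repr__`, a class may define a `_repr_html_` method and the Notebook will automatically prefer that richer representation. A minimal sketch with a hypothetical `ColoredBall` class: ###Code
class ColoredBall(object):
    def __init__(self, color):
        self.color = color
    def __repr__(self):
        # plain-text fallback, used by print() and the terminal
        return 'ColoredBall({})'.format(self.color)
    def _repr_html_(self):
        # rich representation, picked up automatically by the Notebook
        return '<b style="color: {}">ColoredBall</b>'.format(self.color)

ColoredBall('green')
###Output _____no_output_____ ###Markdown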
JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. 
For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f1b98196150> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
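Note that many objects already ship such a representation out of the box; a pandas `DataFrame`, for instance, defines `_repr_html_` and therefore renders as a formatted table without any extra work. A small sketch (with made-up values): ###Code
import pandas as pd

# DataFrame provides _repr_html_, so the Notebook shows it as an HTML table
pd.DataFrame({'Header 1': ['row 1, cell 1', 'row 2, cell 1'],
              'Header 2': ['row 1, cell 2', 'row 2, cell 2']})
###Output _____no_output_____ ###Markdown Wrapping a raw HTML string by hand works in the same way: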
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output _____no_output_____ ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output _____no_output_____ ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. 
###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs one of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ?
"" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f26cf7d1a90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. 
###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. 
###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. 
Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f9745abca90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. 
###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. 
For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7fb364552a90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7fe5601a6350> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. 
###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? 
"" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f4e3051ca90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. 
###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i print(i) ###Output <IPython.core.display.Image object> ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. 
###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. 
Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f08b46f5390> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. 
###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. 
For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output _____no_output_____ ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output _____no_output_____ ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) h ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output _____no_output_____ ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output _____no_output_____ ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
###Markdown
Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs one of the `d3.js` examples.
###Code
Javascript(
    """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')"""
)

%%html
<style type="text/css">
circle {
  fill: rgb(31, 119, 180);
  fill-opacity: .25;
  stroke: rgb(31, 119, 180);
  stroke-width: 1px;
}
.leaf circle {
  fill: #ff7f0e;
  fill-opacity: 1;
}
text {
  font: 10px sans-serif;
}
</style>

%%javascript
// element is the jQuery element we will append to
var e = element.get(0);

var diameter = 600,
    format = d3.format(",d");

var pack = d3.layout.pack()
    .size([diameter - 4, diameter - 4])
    .value(function(d) { return d.size; });

var svg = d3.select(e).append("svg")
    .attr("width", diameter)
    .attr("height", diameter)
  .append("g")
    .attr("transform", "translate(2,2)");

d3.json("./flare.json", function(error, root) {
  var node = svg.datum(root).selectAll(".node")
      .data(pack.nodes)
    .enter().append("g")
      .attr("class", function(d) { return d.children ? "node" : "leaf node"; })
      .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });

  node.append("title")
      .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });

  node.append("circle")
      .attr("r", function(d) { return d.r; });

  node.filter(function(d) { return !d.children; }).append("text")
      .attr("dy", ".3em")
      .style("text-anchor", "middle")
      .text(function(d) { return d.name.substring(0, d.r / 3); });
});

d3.select(self.frameElement).style("height", diameter + "px");
###Output
_____no_output_____
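###Markdown
As a side note, the `Javascript` display object can be handed its dependencies directly: its `lib` (and `css`) arguments accept a URL or list of URLs to load before the snippet runs, which can stand in for the manual `$.getScript` call above. A hedged sketch using the same d3 URL; the exact loading behaviour can differ between notebook front-ends:
###Code
from IPython.display import Javascript, display

# Ask the display machinery to load d3 first, then run the snippet
js = Javascript(
    "console.log('d3 available: ' + (typeof d3 !== 'undefined'));",
    lib="https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js",
)
display(js)
###Output
_____no_output_____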
###Markdown
Audio
IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
###Code
from IPython.display import Audio

Audio("./scrubjay.mp3")
###Output
_____no_output_____
###Markdown
A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occurs: the sum $\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\big(\pi(f_1 - f_2)t\big)\,\sin\big(\pi(f_1 + f_2)t\big)$ behaves like a tone at the average frequency whose loudness rises and falls $|f_1 - f_2|$ times per second (4 Hz for the values below):
###Code
import numpy as np

max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
# linspace needs an integer sample count, hence the int()
times = np.linspace(0, L, int(rate * L))
signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)

Audio(data=signal, rate=rate)
###Output
_____no_output_____
###Markdown
Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:
###Code
from IPython.display import YouTubeVideo

YouTubeVideo('sjfsUzECqK0')
###Output
_____no_output_____
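###Markdown
Recent IPython versions also provide a `Video` display class for showing a local or remote video file in the same way; a brief sketch, assuming a file called `flight.mp4` sits next to the notebook (that filename is made up for illustration):
###Code
from IPython.display import Video

# The browser must support the video's container/codec for playback
Video("./flight.mp4")
###Output
_____no_output_____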
###Markdown
External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
###Code
from IPython.display import IFrame

IFrame('https://ipython.org', width='100%', height=350)
###Output
_____no_output_____
###Markdown
Links to local files
IPython provides built-in display classes for generating links to local files. Create a link to a single file using the `FileLink` object:
###Code
from IPython.display import FileLink, FileLinks

FileLink('../Visualization/Matplotlib.ipynb')
###Output
_____no_output_____
###Markdown
Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'./'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well.
###Code
FileLinks('./')
###Output
_____no_output_____
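###Markdown
Like every other object in this notebook, `FileLink` and `FileLinks` are ordinary display objects, so several of them can be shown from a single cell with `display`; the file names below are simply the assets already used in the examples above:
###Code
from IPython.display import display, FileLink

# display() accepts any number of objects and renders each in turn
display(FileLink('./ipython-image.png'), FileLink('./scrubjay.mp3'))
###Output
_____no_output_____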
For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f47d854da90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7fccf40aea90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. 
###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? 
"" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f9dd4627a90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. 
###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. 
###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. 
Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7f85206cba90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: salmon; font-family: comic sans ms; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. 
###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio("./scrubjay.mp3") ###Output _____no_output_____ ###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook.For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occur: ###Code import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. 
For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: ###Code from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('./') ###Output _____no_output_____ ###Markdown Display of Rich Output In Python, objects can declare their textual representation using the `__repr__` method. ###Code class Ball(object): pass b = Ball() b.__repr__() print(b) ###Output <__main__.Ball object at 0x7fe80405ea90> ###Markdown Overriding the `__repr__` method: ###Code class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) ###Output TEST ###Markdown IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare **some or all** of these representations; all of them are handled by IPython's *display system*. . Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. 
###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> ###Output _____no_output_____ ###Markdown You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
###Code
from IPython.display import Audio
Audio("./scrubjay.mp3")
###Output
_____no_output_____
###Markdown A NumPy array can be converted to audio. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occurs:
###Code
import numpy as np

max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
# np.linspace expects an integer number of samples
times = np.linspace(0, L, int(rate*L))
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)

Audio(data=signal, rate=rate)
###Output
_____no_output_____
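###Markdown As a small variation on the beats example, the two sine waves can be sent to separate channels instead of being summed. This is a sketch that assumes `Audio` interprets a two-row array as stereo data (one row per channel), which recent IPython versions support:
###Code
# Stack the two tones into an array of shape (2, n_samples): left and right channels
stereo_signal = np.vstack([np.sin(2*np.pi*f1*times),
                           np.sin(2*np.pi*f2*times)])
Audio(data=stereo_signal, rate=rate)
###Output
_____no_output_____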